Getting Feedback from a Miniature Robot

Yasser Mohammad
Graduate School of Informatics, Kyoto University, Kyoto, Japan
[email protected]

Toyoaki Nishida
Graduate School of Informatics, Kyoto University, Kyoto, Japan
[email protected]

ABSTRACT

The field of Human-Robot Interaction (HRI) has gained much attention recently because of the expected importance of well-designed interaction modalities for social robots. To achieve natural interaction between the robot and the human, a feedback mechanism from the robot to the human must be designed that allows the robot to express its internal state in a natural way. Verbal and nonverbal feedback from humanoid robots or humanoid robotic heads has been widely studied, but there is little comparable research on the possible feedback mechanisms of non-humanoid, and especially miniature, robots. In this paper a comparison between verbal feedback and motion cues is conducted. The results of the experiment show no significant difference between these two modalities in task completion accuracy, completion time, or perceived naturalness, and a statistically significant improvement when using either of them compared with the no-feedback (control) condition. Moreover, the subjects more frequently selected motion cues as their preferred feedback modality.

1. INTRODUCTION

For Human-Robot Interaction to proceed in a natural way, the robot must be able to understand the human's intention and to communicate its own internal state or intention to the human, through a combined operation named mutual intention formation and maintenance [8] [5]. In previous work, the authors proposed the interactive perception system as a way to solve the first part of the puzzle, namely estimating the human's intention by processing sensed signals from her [4]. To solve the remaining part, the robot needs a way to communicate its own intention to the human in a natural way. If the robot has a humanoid body or face, the problem becomes how to effectively use the available degrees of freedom to generate believable feedback, but if the robot is a miniature robot with very limited degrees of freedom, the problem becomes


how to generate enough distinct feedback patterns using the limited possibilities available while keeping their meaning easily comprehensible by humans. A previous pilot experiment reported in [6] showed that using motion cues as a feedback mechanism is a promising direction. This paper presents a larger experiment to compare the effectiveness and naturalness of motion cues as a feedback mechanism with verbal feedback and with a no-feedback (control) condition. Many researchers have investigated the feedback modalities available to humanoid robots or humanoid heads [1] [2] [3], but research on feedback from non-humanoid robots is still limited. In [11], acting in the environment was suggested as a feedback mechanism. For example, the robot re-executes a failed operation in the presence of a human to inform him about the reasons it failed to complete this operation in the first place. Although this is an interesting way to transfer information, its use is limited to communicating failure. Our method of feedback by motion cues can be considered in part a generalization of the feedback-by-acting paradigm presented in [11], as it is not limited to communicating failure.

2. DESIGN

The experiment (referred to as the TalkBack experiment in this paper) is designed as a game in which a human instructs a miniature robot using free hand gestures to help it follow a specific path to a goal in an environment that is projected on the ground. The path is visible only to the human, and the robot cannot know it except through communication with the human operator. Along the path there are objects of different kinds, as will be explained later. Those objects are not seen by the human but can be sensed by the robot using virtual short-range infrared sensors, so the only way the human can give the robot the correct signals is if the robot can transfer its knowledge to him/her. The experiment was designed so that the partially observable environment becomes a completely observable one if the human and the robot succeed in exchanging their knowledge. Fig. 1 illustrates this point.

2.1 Experimental Setup

A map describes both the required path that is projected on the environment and the locations of the different kinds of virtual objects within it. There are five types of objects that can exist in the TalkBack experiment:

Figure 1: The TalkBack design principle: Combining Human’s and Robot’s Knowledge leads to a visible environment

• Obstacles that must be avoided
• Sand islands that slow down the robot
• Key-gates that open only after stopping over a key
• Keys to open the key-gates
• Time gates that open periodically

The obstacles test the ability to send a MUST signal, the sand islands test the ability to send a SHOULD signal, the time gates test the ability to send a time-critical signal, and the keys test the ability to send a signal about a location rather than a direction. Three different maps were used in this experiment; an example is shown in Fig. 1. Every time the game starts, one object of each kind is placed randomly in the virtual environment so that all objects except the key are located along the path (see Fig. 1).

2.2 Feedback Signals

To be able to accomplish its mission, the robot should be able to give the following feedback signals to the human:

1. What about this direction? A suggestion to go in a specific direction. This can happen in many circumstances, such as when the robot is facing a key-gate and needs to go in the direction of the key.

2. I cannot pass in this direction. Indicates that there is an obstacle or a closed gate in the specified direction that cannot be passed through now.

3. It will be too slow if I pass here. Given once the robot is about to enter a sand island.

4. I found the key. Given once the key is found, to help the human give the correct gestures to move the robot back toward the path.

5. What should I do now? Needed whenever the robot fails to recognize the gesture given by the human or has not received a specific order for a long time.

This list of feedback signals is not only necessary but also sufficient for effective transfer of the internal state of the robot to the human. The rationale for this assertion lies in the task design of the experiment. The task involves only moving in 2D space and passing the obstacles and gates. To succeed in doing that, the

human has to give the correct direction and speed to the robot at all times. As the path is seen only by the human, the robot's knowledge cannot contribute to the selection of speed, which is a function of the complexity of the path. Hence, the feedback signals need only cover all task-relevant information about direction, provided the human and the robot can completely understand each other. Let us assume for now that this complete-understandability condition is met. If there are no objects around, the robot needs to say nothing. If it detects an object, the exact type of the object need not be communicated; only the direction to go is relevant. Sending a signal that specifies a direction eliminates the need for a separate signal about the existence of objects. There are only three possibilities here: either the robot knows the best direction, in which case it sends signal (1); or it only knows that the human-suggested direction is not acceptable, so it sends signal (2); or it has a suggested direction that is not guaranteed to be the best one (the sand-island case), which is exactly what signal (3) covers. In the key case, the robot should indicate information about a specific place rather than a specific direction, which is why there is a separate message for that (signal 4). This shows that the first four signals are sufficient in the case of complete understandability. If the robot cannot understand the human, there must be a fifth signal (signal 5) to convey this fact, as it is the only fact that is not about the environment. One signal is sufficient here because it transfers enough information for the human to repeat the command, and since the path is restricted this is the human's only option; any additional signals would be unnecessary. If, on the other hand, the human cannot understand the robot, the robot's only option is to repeat the signal (perhaps with a slight change in the details), because its choice is restricted by the environment, so no additional signals are needed for this case either. This concludes the argument that the five signals chosen are not only necessary but also sufficient for the TalkBack task. Another reason to believe in the sufficiency of these five signals is that all the experiments conducted were completed successfully in acceptable time (max. 17 minutes).

2.3 Design Decisions and Rationale

Assuming that the robot has no autonomous navigation capabilities, the accuracy and time needed to follow the path depend on two factors:

1. Recognition of the human input
2. Effectiveness and efficiency of the robot's feedback

As the main objective of this experiment was to study the effectiveness of different feedback modalities in transferring the internal state or intention of the robot to the human, special care had to be taken to control the effects of the first factor. A simple solution would have been either to use an unnatural modality for transferring commands to the robot (e.g. a joystick) or to use a predefined set of hand gestures. Neither solution was considered suitable for the TalkBack experiment, because either of them would have reduced the human's social expectations of the robot's communication abilities, and this in

turn would have reduced the effectiveness of the robot's feedback and reduced the difference between the modalities. To reduce the effect of errors in recognizing the human commands while keeping the social expectations as high as possible, a hidden human operator watches the main operator and transfers his own understanding of each gesture to the robot, while the main operator is told that the robot can understand his free gesture commands and will respond to them using the specified modality. It should be clear that the hidden operator is not controlling the robot (which is why TalkBack is not a Wizard of Oz experiment), but is used by the robot as a replacement for one of its sensors, namely the gesture recognizer. From now on, the subject who actually produces the gestures and responds to the robot's feedback will be called the main operator, while the hidden human who transfers the gestures of the main operator to the robot will be called the Hidden Gesture Recognizer. Using a virtual projected environment was a necessity in this experiment, as it is the only way to let the main operator see the path while not seeing the objects on it.

3. IMPLEMENTATION

3.1 Experiment Setup

The environment consists of a 60×40 cm rectangular working space onto which the required path is projected. A camera is used to localize the robot using a marker attached to it. Two environmental cameras capture the working environment and the main operator in real time, and a magic (one-way) mirror hides the Hidden Gesture Recognizer, who uses the GT (Gesture Transfer) software to transfer the main operator's gestures to the robot without being visible. The robot used in this experiment is a miniature robot called e-puck, designed at EPFL. The robot is 7 cm in diameter and is driven by two stepper motors in a differential-drive arrangement.

3.2 Localization System

Localization of the robot within the environment is very important for the operation of the virtual obstacle sensor. For this reason, fusion of two sources of information was used in the TalkBack experiment to robustly and accurately localize the robot:

• Dead reckoning using the following simple kinematic model of the differential drive:

dx/dt = R × (left + right)/2 × cos θ
dy/dt = R × (left + right)/2 × sin θ
dθ/dt = (R/D) × (left − right)

where θ is the clockwise-measured angle of the robot in the frame of reference defined by the map, R is the radius of a robot wheel, D is the distance between the centers of the two wheels, left is the command given to the left stepper motor, and right is the command given to the right stepper motor. Integrating those equations gives the first estimate of the robot's location. For short distances and speeds below 8 cm/sec this approximation is very accurate (within 5 mm of the actual location); it becomes less accurate at high speeds and over large distances. The direction estimate obtained from the above formulas is better than the

location estimate, especially when the robot is rotating in place.

• A vision system based on a novel fast multi-elliptic marker detection method is used as the second source of information about the robot's location. The marker consists of two non-concentric black circles on a white background marker of the same radius as the robot, attached so that it covers the robot completely. The vision system was developed by the authors for the TalkBack experiment and can produce estimates of the robot's location with a maximum error of approximately 10 mm. Due to lack of space, the details of the algorithm used by this vision system will not be given here.

The estimates of these two sources are combined using a linear weighted summation according to the following equations:

xf = av × xv + (1 − av) × xd
yf = av × yv + (1 − av) × yd
θf = bv × θv + (1 − bv) × θd

where av and bv are functions of the following variables:

• The ellipse strength of the outer circle of the marker. This value is determined by the vision system and indicates how confident the system is in its position estimate (used to find av).

• The ellipse strength of the inner circle of the marker. This is also determined by the vision system and specifies (together with the outer-circle ellipse strength) the confidence of the system in its direction estimate (used to find bv).

• The time since the last position estimate. The larger this value is, the larger av and bv become.

The value of bv is always smaller than the value of av, and empirically bv is saturated at 0.25, as the direction estimate of the current vision system is not accurate compared with the direction estimate of dead reckoning, especially over small periods.
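To make the fusion concrete, the following is a minimal sketch of the localization pipeline described above: dead reckoning integrated from the kinematic model, combined with the vision estimate through the weighted sums for xf, yf, and θf. The function names, the time-stepping scheme, and the mapping from ellipse-strength confidences to the weights av and bv are illustrative assumptions rather than the authors' implementation.

```python
import math

def dead_reckoning_step(x, y, theta, left, right, R, D, dt):
    """Integrate the differential-drive model over one short time step dt."""
    v = R * (left + right) / 2.0          # translational speed
    w = (R / D) * (left - right)          # rotational speed
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

def fuse(dead, vision, a_v, b_v):
    """Weighted fusion of the dead-reckoning and vision pose estimates."""
    (xd, yd, td), (xv, yv, tv) = dead, vision
    b_v = min(b_v, 0.25)                  # b_v empirically saturated at 0.25
    xf = a_v * xv + (1 - a_v) * xd
    yf = a_v * yv + (1 - a_v) * yd
    tf = b_v * tv + (1 - b_v) * td
    return xf, yf, tf
```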

3.3 GT Software

The Hidden Gesture Recognizer uses the GT software to send the commands recognized from the free gestures of the main operator to the robot. It should be stressed that the Hidden Gesture Recognizer in the TalkBack experiment does not control the robot but is utilized by the robot as an external sensor. The software allows the Hidden Gesture Recognizer to send the following kinds of gestures to the robot:

• Pointing Gestures (Goto Here)
• Direction Gestures (Move in this direction)
• Rotation Gestures (Clockwise/Anticlockwise)
• Speed Control Gestures (Slower/Faster)
• Reinforcement Gestures (Yes/No)
• Stopping Gesture (Stop)
• Continue Gesture (Continue what you are doing)


This set of gestures was believed to cover the gestures expected from the human during the real experiment, based on an earlier pilot study reported in [6], and this was confirmed by the actual experiments. The gesture command generated using the GUI is not fed directly to the robot but is modified as follows:

• With probability 8%, the gesture is changed to Unknown to emulate a failure to recognize the gesture.

• With probability 7%, the gesture is converted into another gesture drawn from a probability distribution specific to each gesture command. This distribution assigns 30% to the reverse gesture (e.g. faster rather than slower), and the remaining 70% is distributed uniformly over the rest of the gestures, with a random parameter selected for parameterized gestures such as direction gestures.

This processing of the GUI-generated signals reduces accurate gesture transfer to around 85%, emulating an excellent real-world motion-detection-based gesture recognizer, assuming that the recognition rate of the Hidden Gesture Recognizer itself is close to 100%. By controlling the gesture transfer operation in this way, the effect of command understandability could be controlled while keeping enough noise in the input to test the noise-rejection properties of each modality.
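The noise model described above can be sketched as follows. The gesture vocabulary, the reverse-gesture table, and the function name are placeholders and not the actual GT implementation; only the probabilities (8% failures, 7% substitutions split 30/70) come from the text.

```python
import random

GESTURES = ["goto", "direction", "rotate_cw", "rotate_ccw",
            "faster", "slower", "yes", "no", "stop", "continue"]
REVERSE = {"faster": "slower", "slower": "faster", "yes": "no", "no": "yes",
           "rotate_cw": "rotate_ccw", "rotate_ccw": "rotate_cw"}

def corrupt(gesture):
    """Emulate an imperfect gesture recognizer (~85% correct transfer)."""
    r = random.random()
    if r < 0.08:                        # 8%: recognition failure
        return "unknown"
    if r < 0.15:                        # 7%: substitution error
        if gesture in REVERSE and random.random() < 0.30:
            return REVERSE[gesture]     # 30% of substitutions: reverse gesture
        return random.choice([g for g in GESTURES if g != gesture])
    return gesture                      # otherwise passed through unchanged
```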

3.4 Robot Software

The control software of the robot runs on a PC and is built using the EICA (Embodied Interactive Control Architecture) architecture at level L0 of the specification as given in [8] and [10]. The EICA architecture is a hybrid reactive-deliberative architecture for real-world agents that was developed by the authors to combine autonomy with interaction abilities at the lowest level of the agent's control system. Fig. 2 shows an organizational view of it. Due to lack of space, the details of the EICA architecture will not be given here; for details refer to [8], [9], [5], [7] and [10].

Figure 2: The EICA control architecture

The EICA system consists of system components that are managed by the architecture and of application-specific components. The system components are the Intention Function, which represents the robot's current intention [9], and the Action Integrator, which fuses the actions suggested by the intentions registered in the Intention Function to produce the final commands to the robot's actuators. There are four types of active application-specific components:

• Processes: Active components that run continuously. Every process has an attentionality attribute that determines how frequently it is allowed to run by the system. Processes can create, drop, communicate with, and change the attentionality of other processes or intentions; they can also manage the intentionality of registered intentions. Normal processes cannot generate actions that affect the robot's behavior directly.

• Reflexes: A special type of process with a fixed maximum attentionality that can produce actions going directly to the actuators.

• Intentions: Reactive processes that, along with attentionality, have an intentionality attribute that determines the relative priority of their generated actions. Intentions must be registered with the Intention Function to run, and their generated actions are combined by the Action Integrator.

• Low Level Emotions: Reactive processes that manage the values of variables called modes according to the context and directly sensed information. Each mode is represented by a normal distribution. EICA's Low Level Emotions need not correspond to human emotions but are supposed to be specific to the task of the robot.
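The following is a schematic sketch of how these four component types might be represented; the class names and attribute defaults are illustrative assumptions, and the scheduling and action-integration machinery of EICA is omitted.

```python
class Process:
    """Active component; attentionality controls how often it may run."""
    def __init__(self, attentionality=0.5):
        self.attentionality = attentionality  # adjustable by other processes

    def step(self, robot):
        pass  # normal processes do not act on the actuators directly


class Reflex(Process):
    """Process with a fixed maximum attentionality that may act directly."""
    def __init__(self):
        super().__init__(attentionality=1.0)


class Intention(Process):
    """Reactive process whose proposed actions are fused by the Action Integrator."""
    def __init__(self, attentionality=0.5, intentionality=0.0):
        super().__init__(attentionality)
        self.intentionality = intentionality  # relative priority of its actions

    def propose_action(self, robot):
        return None  # e.g. a (direction, speed) suggestion, or None


class LowLevelEmotion(Process):
    """Maintains one 'mode' value (e.g. Confusion) from context and sensors."""
    def __init__(self, name):
        super().__init__()
        self.name, self.value = name, 0.0
```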

3.4.1 Perception Processes

The perception processes were designed to reduce the need to duplicate computations in the running processes. Two processes encapsulate the localization system described in section 3.2:

• sensorLocation: Uses the combination of dead reckoning and vision-based localization described in section 3.2 to give the current location and direction of the robot.

• sensorSpeed: Differentiates sensorLocation to obtain the current translational and rotational speeds of the robot.

Two more perception processes implement local sensing of the virtual environment:

• sensorsInfra: A set of 8 virtual infrared sensors that detect the existence of virtual objects in the environment. The characteristics of those sensors are designed to emulate the built-in infrared sensors of the robot.

• sensorKey: A special sensor that gives the direction of the key when the robot is within 3 cm of the corresponding key-gate.

The input from the main operator (through the Hidden Gesture Recognizer) is sensed by two perception processes:

• sensorCommand: Encapsulates the gesture-recognition part of the system; in the current version of the experiment it reads the commands from the Hidden Gesture Recognizer and signals the other processes in the system about those commands.

• sensorReinforce: Looks for Yes/No commands from sensorCommand and adjusts the reinforcement level of the robot based on those commands. This sensor is very important for correcting the behavior of the robot when errors occur in gesture recognition (see section 3.3).

These perception processes provide the robot with all the information about the environment, user commands, and user feedback needed to complete its task and produce the correct signals.
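As an illustration, a virtual infrared sensor such as sensorsInfra could be emulated roughly as below; the eight-ray layout follows the text, while the range, the angular acceptance of each ray, and the object representation are assumptions.

```python
import math

class VirtualObject:
    def __init__(self, kind, x, y, radius):
        self.kind, self.x, self.y, self.radius = kind, x, y, radius

def sensors_infra(rx, ry, rtheta, objects, max_range=5.0):   # range in cm, assumed
    """Return 8 readings: distance to the nearest virtual object per ray, or None."""
    readings = []
    for i in range(8):
        ray = rtheta + i * math.pi / 4                        # 8 rays around the body
        best = None
        for obj in objects:
            dx, dy = obj.x - rx, obj.y - ry
            dist = math.hypot(dx, dy) - obj.radius
            bearing = math.atan2(dy, dx)
            diff = abs((bearing - ray + math.pi) % (2 * math.pi) - math.pi)
            if diff < math.pi / 8 and 0 <= dist <= max_range:  # roughly along this ray
                best = dist if best is None else min(best, dist)
        readings.append(best)
    return readings
```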

Feedback Signal                   Conf.  Sugg.  Resis.  Hesit.  Sat.
What should I do now?             1.0    0      0       0.5     0
I cannot pass in this direction.  0.1    0.8    0       0.8     0
What about this direction?        0      1.0    0       0       0
It will be too slow.              0      0      1.0     1.0     0
I found the key.                  0      0.9    0       0       1.0

Table 1: Nominal Mode Values for Various Feedback Signals

3.4.2 Behavior Processes

As the robot in this experiment has no skills other than navigating and giving feedback, only two behavior-generation processes were needed:

• processFeedback: Responsible for managing the execution of feedback intentions based on the current Low Level Emotional state.

• processNavigator: Manages the navigation of the robot in the absence of explicit human commands.

3.4.3 Intentions

The Intentions implement intention-in-action in EICA. In this experiment the robot has only navigation and feedback intentions. Feedback intentions are implemented through a set of five Intentions (planShowXYZ) corresponding to the feedback modes according to Table 3. Basic navigation intentions are implemented in the planSetDirection intention, which rotates the robot to the specified direction, and planGoTo, which moves the robot to the specified location in the environment. Two other intentions were used to stabilize the operation of the robot:

• driveObey: Manages the intentionality of the planShowXYZ family and the attentionality of processNavigator to give priority to the human command over the navigator as long as the number of virtual collisions is less than a threshold.

• driveGotoKey: Tries to move the robot in the direction of the key once it is detected (without this intention, the no-feedback condition could never be implemented).
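A possible reading of the driveObey logic is sketched below; the threshold value, the concrete attentionality/intentionality levels, and the direction of the adjustments are assumptions, since the text only states that the human command keeps priority while the virtual collision count stays below a threshold.

```python
def drive_obey_step(navigator, show_intentions, n_virtual_collisions, threshold=3):
    """Keep the human command dominant until too many virtual collisions occur."""
    obeying = n_virtual_collisions < threshold
    navigator.attentionality = 0.2 if obeying else 0.9   # navigator steps back while obeying
    for intention in show_intentions:                    # the planShowXYZ family
        intention.intentionality = 0.2 if obeying else 0.8
    return obeying
```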

3.4.4 Reflexes

Only one reflex (planEmergencyStop) was used to prevent the robot from going out of the environment.

3.4.5 Low Level Emotions

Five modes (see section 3.4) were designed that can generate all the needed feedback signals specified in section 2.2. The input from the Hidden Gesture Recognizer and the internal state adjust the values of those modes, and when they reach specific patterns a feedback signal is given. The modes used in this experiment are:

• Confusion, which increases when the robot cannot understand the input from the human or when it encounters an unexpected situation.

• Suggestion, which increases when the robot wants to suggest a specific direction of movement.


• Resistance, which increases when the command given by the operator would cause a collision from the robot's point of view.

• Hesitation, which increases when the robot thinks that the human's command is not optimal but has no other suggestion. This can happen when the human insists that the robot should pass through a sand island.

• Satisfaction, which increases when the robot approaches the key.

Table 1 gives the nominal values of the emotion vector that trigger the various feedback signals. Each of those modes triggers a specific planShowXYZ intention based on its degree. The Intentions are combined by the Action Integrator to form the final behavior of the robot.

Figure 3: Intentionality evolution of the suggestion and obedience plans. (1) Approaching an obstacle, (2) user insists on colliding, (3) very near to a collision, (4) user gives an avoidance command

Fig. 3 shows the evolution of the intentionality of the planShowSuggestion intention that generates the actual feedback, and of the actions generated by the driveObey intention responsible for forcing the robot to follow the human commands. In the beginning, the intentionality of the feedback intention was not high enough to generate the feedback until the robot came very near to the obstacle ({1} in the figure). At a specific point the feedback started, but the human did not understand it immediately and repeatedly gave a command to go in the direction of the obstacle ({2} in the figure). This caused the intentionality of the obedience-generated actions to rise, stopping the feedback. This in turn caused a very-near-to-a-collision situation ({3} in the figure), raising the feedback intentionality again and generating the feedback until the human finally gave a direction command that caused the robot to avoid the obstacle.

As seen in this example, the evolution of the intentionality of the different intentions is smooth, which makes the robot's actions more natural and less jerky than in traditional systems where the intention switches suddenly.
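The text states that a feedback signal fires when the mode vector reaches a specific pattern; one way to realize this, using the nominal values of Table 1, is a nearest-pattern match as sketched below. The distance measure and the threshold are assumptions, not the authors' mechanism.

```python
NOMINAL = {  # (Confusion, Suggestion, Resistance, Hesitation, Satisfaction), from Table 1
    "what_should_i_do": (1.0, 0.0, 0.0, 0.5, 0.0),
    "cannot_pass":      (0.1, 0.8, 0.0, 0.8, 0.0),
    "what_about_this":  (0.0, 1.0, 0.0, 0.0, 0.0),
    "too_slow":         (0.0, 0.0, 1.0, 1.0, 0.0),
    "found_key":        (0.0, 0.9, 0.0, 0.0, 1.0),
}

def trigger_feedback(modes, threshold=0.35):
    """Return the feedback signal whose nominal pattern is closest to the
    current mode vector, or None if no pattern is close enough."""
    best, best_d = None, float("inf")
    for name, pattern in NOMINAL.items():
        d = sum((m - p) ** 2 for m, p in zip(modes, pattern)) ** 0.5
        if d < best_d:
            best, best_d = name, d
    return best if best_d < threshold else None
```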

4. PROCEDURE

The experiment was done with 22 main subjects (18 males and 6 females) with the following demographic distribution: 8 Asians, 6 Europeans, 8 Africans. The same person served as the Hidden Gesture Recognizer in all the sessions to keep the gesture recognition accuracy fixed. Each subject interacted with three feedback modalities:

• Stop: The robot simply stops whenever it needs to give feedback. In this mode two simplifications were added to limit the possible execution time of the experiment: the robot moves in the direction of the key once ordered to rotate to any angle near it, and stops automatically over the key. Without these simplifications, the main operator's job of finding the location of the key would have been impossible.

• Speech: The robot gives a predefined statement for every kind of feedback signal. Table 2 lists those statements. For speech synthesis, SAPI 5.1 was used with the Microsoft default TTS engine.

Modality       Statement
Confusion      I am confused, what should I do?
Suggestion     I suggest going θ degrees clockwise/counterclockwise
Resistance     I do not think I should go in that direction
Hesitation     I am not sure this is wise
Satisfaction   I found the key

Table 2: The statements used by the robot in the Speech feedback mode

• Motion: The robot gives motion feedback according to Table 1. Table 3 gives the actual outputs of the various Intentions used to implement the motion feedback modality. Two different motion feedback mechanisms were designed for every mode. The robot starts with the first alternative and then switches to the second alternative if the main operator does not correctly understand the signal after a number of repetitions that is adaptively adjusted based on the history of the interaction with this operator.

Modality      First Alternative                                            Second Alternative
Confusion     rotate around; once a revolution is completed, return back   rotate 30 degrees, go forward and backward, then continue rotating
Suggestion    rotate to the suggested direction, go forward and backward   rotate to the suggested direction slowly, then rotate back faster
Resistance    rotate to the reverse direction, go forward and backward     rotate to the reverse direction slowly, then rotate back faster
Hesitation    rotate a few degrees in each direction, then back            suggest a different direction
Satisfaction  rotate continuously                                          stop

Table 3: The motion-based feedback mechanisms associated with each Low Level Emotion/mode

The order of the modalities was shuffled to counter any correlation between the order and the results. Each map was randomly assigned to seven subjects (map 2 was used with the twenty-second subject). Each subject interacted with the same map in the three episodes of the game to reduce the effects of any inherent difference in difficulty between the maps. After finishing interacting with each modality, the main operator guesses how many objects of each kind existed and ranks the robot, on a 0-10 scale, in terms of its ability to recognize gestures, the understandability of its feedback, and how efficient it was in giving this feedback.
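The switching between the first and second motion alternatives of Table 3 could look roughly like the sketch below. The per-operator adaptation rule shown here is an assumption; the paper only says the repetition limit is adjusted based on the interaction history.

```python
class MotionFeedbackSelector:
    def __init__(self, initial_limit=2):          # initial repetition limit, assumed
        self.limit = initial_limit
        self.failures = 0

    def choose(self, signal, alternatives):
        """alternatives: dict mapping signal -> (first, second) motion patterns."""
        first, second = alternatives[signal]
        return first if self.failures < self.limit else second

    def report(self, understood):
        """Update the interaction history after each feedback attempt."""
        if understood:
            self.failures = 0
        else:
            self.failures += 1
            self.limit = min(self.limit + 1, 5)   # be more patient with this operator
```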

The main operator is also asked to rank how enjoyable the session was on the same scale. An extra question about the quality of the sound is added in the Speech modality. The Hidden Gesture Recognizer also ranked his own gesture recognition accuracy after each episode (0-10). After finishing the whole set, the main operator is asked to select his/her preferred feedback modality and the fastest episode, and to write an optional free comment. The results of those experiments were statistically analyzed to determine the effectiveness of each feedback modality. The results and their discussion are given in the next section.

5. RESULTS AND DISCUSSION

Eight dimensions of evaluation were used to determine the effectiveness of the three feedback modalities used in the experiment. The first four are objective measures based on the task, while the last four are subjective evaluations by the main operator:

• Time to completion.

• Path error. A measure of how different the actual path of the robot was from the desired path. To calculate this value, the location of the robot is logged every 10 ms, except during the motion feedback periods in the Motion mode. For each logged point, the distance to the required path is calculated, and the point errors are then averaged over all points.

• Number of collisions.

• Correct object recognitions. How many times the user could recognize the type of an object.

• Understandability. How well the main operator understood the robot's feedback.

• Recognition. How well the main operator thinks the robot can understand his/her gestures.

• Efficiency. How much time is needed to understand the messages from the robot.

• Plausibility. How enjoyable the experiment was in the view of the main operator.
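For clarity, the path-error metric can be computed as below: the mean distance from each logged robot position (sampled every 10 ms) to the required path, treated here as a polyline. The helper names and the polyline representation are assumptions.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    if vx == 0 and vy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / (vx * vx + vy * vy)))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def path_error(logged_points, path):
    """Mean distance from the logged points to the required path polyline."""
    def dist(p):
        return min(point_segment_distance(p, path[i], path[i + 1])
                   for i in range(len(path) - 1))
    return sum(dist(p) for p in logged_points) / len(logged_points)
```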

Figure 4: Mean of the evaluation dimensions for each feedback modality

The mean and standard deviation of each of the eight evaluation dimensions were computed and compared. Table 4 gives the values for each modality and the average value across all modalities. Fig. 4 gives a graphical view of the differences in the means between the three modalities in the eight dimensions.

Modality  Statistic  Time    Error  Collisions  Ob. Rec.  Underst.  Recogn.  Effic.  Plaus.
Motion    Mean       8.70    12.19  2.32        2.27      6.50      6.64     5.82    6.68
          Std. Dev.  3.0356  4.165  1.985       1.120     1.504     1.529    2.039   1.644
Sound     Mean       8.75    13.10  2.23        1.36      5.95      6.32     5.86    6.32
          Std. Dev.  2.636   3.850  2.137       1.002     1.618     1.701    1.859   1.393
Stop      Mean       14.81   19.01  7.50        1.09      3.05      6.59     2.64    3.00
          Std. Dev.  2.529   3.595  3.320       1.109     1.618     1.593    1.465   1.902

Table 4: Comparative statistics of the evaluation dimensions for the different feedback modalities

An independent-samples T-test was applied to find the significance of the detected mean differences between the feedback modalities in each of the eight evaluation dimensions. Table 5 summarizes the most important findings. Both the equal-variance-assumed and equal-variance-not-assumed versions of the test gave the same results; the table shows the equal-variance-not-assumed values. ANOVA, the rank-sum test, and the K-test all confirmed the results of the T-test. Due to limited space, the details of this further analysis will not be given here.
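The comparison uses an independent-samples t-test without assuming equal variances (Welch's test). A minimal example of the same computation with SciPy is shown below; the argument arrays are placeholders for the per-subject scores of two modalities on one evaluation dimension.

```python
from scipy import stats

def compare_modalities(scores_a, scores_b):
    """Welch's t-test between two modalities on one evaluation dimension."""
    t, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    return t, p

# usage (hypothetical arrays): t, p = compare_modalities(motion_times, stop_times)
```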

The results of this statistical analysis reveal that the Motion and Speech modalities were much better than the Stop modality in all dimensions except correct object recognition. The reason there was no significant difference in this dimension is that the feedback given by the robot only describes its intention and does not try to describe the environment. The mean rating of the speech quality was 6.73, which is not particularly high, but the mean number of repetitions needed before the main operator started to react was only 1.5, which means that although the subjects ranked the sound quality as poor they could actually understand it. Most subjects selected the Motion modality as their preferred modality (14 subjects, compared to only 7 who preferred the Speech modality), while only one subject selected the Stop modality, although her completion time and error were much higher in that modality. The reason given for this exceptional ranking was that the Stop modality gave her more sense of control over the robot, while in the other modalities she felt that the robot had its own mind. The superiority of the motion modality in this dimension can be attributed to the fact that the cognitive load it gave to

Table 5 (fragment): Dimensions compared — Time, Error, Correct Object Id., Collisions, Understandability, Recognition, Efficiency, Pleasure; modality pairs including Speech-Stop
