The Fifth Asia-Pacific Conference on Computer-Human Interaction (APCHI 2002) November 1-4 2002.
A Fundamental Study on Error-Corrective Feedback Movement in a Positioning Task

Ryo Takagi (a), Yoshifumi Kitamura (b), Satoshi Naito (b) and Fumio Kishino (b)

(a) Graduate School of Engineering, (b) Graduate School of Information Science and Technology, Osaka University, 2-1 Yamadaoka, Suita-shi, Osaka, Japan
(Currently with City of Osaka Architecture Foundation, 1-2-7-1300 Asahicho, Abeno-ku, Osaka-shi, Osaka, Japan)
[email protected]
ABSTRACT
In this paper, a positioning task that includes translation and rotation is investigated, and experimental results for three movement processes (i.e., initial feedforward movement (IFFM), coarse error-corrective feedback movement (coarse EFBM), and fine error-corrective feedback movement (fine EFBM)) are reported for translation and rotation, and for the spatially-coupled and spatially-decoupled styles. From the results of all experiments we found quite different tendencies for the three movement processes. For example, the peak velocity for both translation and rotation in the spatially-coupled style was larger than that in the spatially-decoupled style, and the time of the IFFM for translation in the spatially-coupled style was shorter than that in the spatially-decoupled style. The time of the coarse EFBM for both translation and rotation in the spatially-coupled style was shorter than that in the spatially-decoupled style. There was no significant difference between the spatially-coupled and spatially-decoupled styles in the fine EFBM. The implications of these findings for user interface design and evaluation are discussed.
Keywords: user interface, virtual environment, positioning task, object manipulation, spatially-coupled, spatially-decoupled, translation, rotation, movement process
1. INTRODUCTION
Research on object manipulation in computer-generated 3D virtual environments has progressed as a key technology for bringing everyday real-world activities into the interface with a computer. A positioning task, which determines the position and orientation of a virtual object, is one of the fundamental tasks performed in a virtual environment. It is therefore necessary to analyze this basic task when designing an intuitive and efficient object manipulation interface. To construct more intelligent and advanced user interfaces, it is also necessary to draw on knowledge of the kinematics of computer-human interaction obtained from careful experiments and investigation.

Various motions in the real world have been analyzed in order to understand the perceptual and kinematic characteristics of human movement, and much of this knowledge concerns positioning tasks. For example, it has been proposed that the movement of a hand from an initial position to a target consists of two processes, i.e., an initial feedforward movement (IFFM) and an error-corrective feedback movement (EFBM) 1. Based on this knowledge, pointing and positioning tasks on computers have been investigated 2-4. However, when the task requires precise placement at the target position, the EFBM tends to persist for a long time, like a long tail: a fine movement is conducted after the relatively coarse movement within the EFBM period. This fine movement has not been specifically discussed. To design a sophisticated user interface that provides a user with accurate spatial translation and rotation, it is necessary to analyze carefully the fine EFBM contained within the EFBM and to utilize the knowledge obtained from that analysis.

In this paper, we investigate the fine EFBM separately from the coarse EFBM in a positioning task. Findings on these movement processes are presented, obtained from experiments on a positioning task that included translation and rotation in the spatially-coupled and spatially-decoupled styles.
2. PREVIOUS WORK
Extensive research on the movement of the hand in the real world has been carried out in order to understand human spatial perceptual-motor processes. For example, it was proposed that the movement of a hand from an initial position to a target consists of two characteristic processes, i.e., the IFFM and the EFBM 1. The IFFM denotes the initial planned movement according to the distance to a target, and the EFBM denotes the decelerating movement toward the target under real-time control based on visual feedback and other cues.
MacKenzie et al. demonstrated the effects of target width and of the distance to the target on pointing movements in real space based on the above features 2. Here, the time of the IFFM was defined as the time from the start of the movement to the moment of peak velocity, and the time of the EFBM was defined as the time from the moment of peak velocity to the moment when the magnitude of velocity first falls below a threshold value indicating convergence. These features have also been examined in pointing tasks in virtual environments 3-5. However, the EFBM tends to persist for a long time, like a long tail, when the task requires precise placement at the target position: a fine movement is conducted after the relatively coarse movement within the EFBM period. This fine movement after the coarse EFBM has not been considered in most studies. Therefore, it is necessary to investigate the EFBM by dividing it into two different times, i.e., the time of the coarse EFBM and the time of the fine EFBM.

Recently, considerable study has been devoted to positioning tasks that include not only translation but also rotation 6-13. In particular, Wang et al. examined translation time and rotation time individually for object manipulation that included both translation and rotation in the spatially-coupled style 11. They demonstrated that the translation time was a determinant of the total task completion time. In Ref. 11, the translation time was defined as the sum of the time of the IFFM and the time of the EFBM for the translation movement, and the rotation time was defined as the time during which rotation occurs. However, for object manipulation with both translation and rotation, the IFFM and the EFBM themselves have not been examined.

Two different coupling styles can be considered for direct manipulation. The first is the spatially-coupled style, in which the control space of the user's hand is superimposed on the display space of the objects. The second is the spatially-decoupled style, in which the control space of the user's hand is separated from the display space of the objects. Ware and Rose compared the spatially-coupled style with the spatially-decoupled style in a rotation task in a 3D virtual environment 14; however, the two styles have not yet been compared for a positioning task that includes both translation and rotation. Although Wang et al. investigated object manipulation that included translation and rotation in the spatially-coupled style 11, this was not done in the spatially-decoupled style.

In this paper, we investigate a positioning task that includes translation and rotation using both the spatially-coupled and the spatially-decoupled styles. The experimental results are analyzed in terms of three movement processes, i.e., the IFFM, the coarse EFBM, and the fine EFBM.
3. METHOD
3.1. Subjects
The subjects were twelve male graduate students with experience in using a computer. All were right-handed and had normal or corrected-to-normal vision. Before the experimental session, they were given a full description of the task. Each subject participated in an experimental session of about one hour's duration.
3.2. Experimental Setup
The experimental setup, shown in Fig. 1, was constructed in order to examine the three movement processes of the spatially-coupled and spatially-decoupled styles in the positioning task. A graphics workstation (Silicon Graphics Inc., Onyx) is used to generate the virtual environment and to control the experimental configuration. For displaying the virtual environment, a Sony 17-inch CRT monitor is positioned facing downward at an angle, and a half-silvered mirror is placed between the screen and the workspace. A stereoscopic, head-coupled graphic display is presented on the screen and reflected by the mirror, so that the image (on a black background) is perceived by the subject as if it were below the mirror, on the workspace. The images for the right and left eye are alternately displayed on the monitor at 96 Hz and are synchronized with liquid crystal shutter glasses (Solidray Co., Ltd., SB300), which the subject wears to obtain a stereoscopic view of the images projected onto the mirror. A magnetic 6 DOF tracker (Polhemus Inc., Fastrak) is fixed to the right side of the glasses to track the position of the eyes at 60 Hz with about 21 ms lag. This information is processed by the Onyx to provide the subject with a real-time, head-coupled view.

A controller, a wooden cube (3 x 3 x 3 cm) with a magnetic tracker inside its center, is used as the input device. Position and angle information from the magnetic tracker is measured at 60 Hz and moves a brown graphic cursor cube that follows the motion of the controller cube with a one-to-one ratio. The information from the magnetic tracker on the controller cube is recorded for data analysis. The target is a light-blue wireframe graphic cube, which appears on the surface of the workspace when the subject looks into the mirror. At the beginning of each trial, the start position of the controller cube is indicated by a blue wireframe graphic cube. The controller, cursor, target and start position cubes all have the same size of 3 cm. A mouse operated by the subject's left hand is used to control the start and end of a trial.
Figure 1. The experimental system (17-inch CRT monitor, half-silvered mirror, liquid crystal shutter glasses with Fastrak, controller with Fastrak, cursor, target, mouse, and workspace).
Figure 2. The spatially-coupled and spatially-decoupled styles: (a) spatially-coupled; (b) spatially-decoupled-1; (c) spatially-decoupled-2 (cursor, controller and target cubes with 10 cm offsets).
Figure 3. The velocity data measured for a typical subject: velocity [cm/s] against time [s], showing the timing of peak translation velocity, the convergence threshold, and the IFFM, coarse EFBM and fine EFBM periods.
The experiment is conducted in a dark room. A light is placed by the workspace under the mirror, so the subject always sees not only the graphic cubes but also the controller cube and his own hand.
3.3. Experimental Design
This setup provides three coupling styles: spatially-coupled (Fig. 2(a)), spatially-decoupled-1 (Fig. 2(b)) and spatially-decoupled-2 (Fig. 2(c)). In the spatially-coupled style, the cursor cube is superimposed on the controller cube, and the start position and the target cube are aligned with the midline of the subject's body. In spatially-decoupled-1, the cursor cube is located 10 cm to the left of the controller cube, and the start position and the target cube are aligned with the midline of the subject's body. In spatially-decoupled-2, the start position and the target cube are aligned 10 cm to the left of the midline of the subject's body. These two spatially-decoupled styles are set up in order to separate the effects of the control position of the hand from those of the display position.

In every coupling style, the target cube is located either 5 cm or 10 cm away from the start position of the cursor cube, rotated either 30 deg. or 60 deg. clockwise about the vertical axis of the workspace. In summary, these manipulations provide a balanced experimental design of 3 coupling styles x 2 target distances x 2 target angles. The coupling styles are counterbalanced across subjects, and the target distance and angle are randomized over trials. Twelve trials are conducted for each experimental condition.
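The balanced 3 x 2 x 2 design with 12 trials per condition can be summarized concretely as a trial schedule. The following Python sketch is illustrative only: all names are ours, and the simple rotation of block order stands in for whatever counterbalancing scheme was actually used.

```python
# Illustrative sketch of the 3 coupling styles x 2 distances x 2 angles design,
# with 12 trials per condition, blocked by coupling style.
import itertools
import random

COUPLING_STYLES = ["spatially-coupled", "spatially-decoupled-1", "spatially-decoupled-2"]
DISTANCES_CM = [5, 10]
ANGLES_DEG = [30, 60]
TRIALS_PER_CONDITION = 12

def build_session(subject_index, seed=0):
    """Return an ordered list of trials (dicts) for one subject."""
    rng = random.Random(seed + subject_index)
    # Coupling style is blocked; block order is rotated across subjects.
    shift = subject_index % len(COUPLING_STYLES)
    order = COUPLING_STYLES[shift:] + COUPLING_STYLES[:shift]
    session = []
    for style in order:
        block = [
            {"style": style, "distance_cm": d, "angle_deg": a}
            for d, a in itertools.product(DISTANCES_CM, ANGLES_DEG)
        ] * TRIALS_PER_CONDITION
        rng.shuffle(block)  # distance and angle randomized over trials within the block
        session.extend(block)
    return session
```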
3.4. Task
The task is to match the position and angle of the cursor cube to those of the target cube as quickly and as accurately as possible. The subject holds the controller cube on the workspace with the right hand; the left hand rests on the mouse, ready to click the left button. To start a trial, the subject accurately superimposes the cursor cube on the start position cube. When the subject presses the left mouse button, the target cube appears at a position and angle randomly selected from the 2 distances and 2 angles. The subject manipulates the controller cube to match the cursor cube to the target cube as quickly and as accurately as possible and, when satisfied with the match of position and angle, presses the left mouse button to end the trial. If the center-to-center distance error between cursor and target is larger than 0.5 cm or the angle error is larger than 5 deg., the trial is counted as a failure and discarded. Trials are blocked according to the 3 coupling styles. At the beginning of each block, the subject is given 48 practice trials. The order of target positions and angles is randomly generated, and 12 trials are repeated in each experimental condition.
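The acceptance criterion for a trial (center distance error no larger than 0.5 cm and angle error no larger than 5 deg.) can be expressed as a simple check. This sketch assumes numpy and uses illustrative names; the paper does not specify how the check was implemented.

```python
# A sketch of the trial-acceptance check; the thresholds come from the text,
# the function and variable names are assumptions.
import numpy as np

MAX_DISTANCE_ERROR_CM = 0.5
MAX_ANGLE_ERROR_DEG = 5.0

def trial_succeeded(cursor_pos_cm, target_pos_cm, cursor_angle_deg, target_angle_deg):
    """Return True if the final cursor-target match is within the acceptance thresholds."""
    distance_error = np.linalg.norm(np.asarray(cursor_pos_cm) - np.asarray(target_pos_cm))
    angle_error = abs(cursor_angle_deg - target_angle_deg)
    return distance_error <= MAX_DISTANCE_ERROR_CM and angle_error <= MAX_ANGLE_ERROR_DEG
```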
Table 1. The analyzed data (name [unit]: definition).
Peak translation velocity (PTV) [cm/s]: amplitude of the maximum translation velocity, obtained from the translation velocity data.
Peak rotation velocity (PRV) [deg./s]: amplitude of the maximum rotation velocity, obtained from the rotation velocity data.
Difference between translation start and rotation start [s]: difference between the time at which the translation velocity first exceeds its threshold (translation start time) and the time at which the rotation velocity first exceeds its threshold (rotation start time).
Translation acceleration time (TAT) [s] (time of translation IFFM): time from the translation start time to the time of peak translation velocity.
Rotation acceleration time (RAT) [s] (time of rotation IFFM): time from the rotation start time to the time of peak rotation velocity.
Translation deceleration time (TDT) [s] (time of translation coarse EFBM): time from the time of peak translation velocity to the first time at which the translation velocity falls below its threshold (initial translation convergence time).
Rotation deceleration time (RDT) [s] (time of rotation coarse EFBM): time from the time of peak rotation velocity to the first time at which the rotation velocity falls below its threshold (initial rotation convergence time).
Translation fine error-corrective time (TFT) [s]: time from the initial translation convergence time to the last time at which the translation velocity falls below its threshold.
Rotation fine error-corrective time (RFT) [s]: time from the initial rotation convergence time to the last time at which the rotation velocity falls below its threshold.
Initial translation time (ITT) [s]: TAT + TDT.
Initial rotation time (IRT) [s]: RAT + RDT.
Translation time (TT) [s]: TAT + TDT + TFT.
Rotation time (RT) [s]: RAT + RDT + RFT.
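As a reading aid for Table 1, the following Python sketch shows how the acceleration, deceleration and fine error-corrective times could be computed from a single trial's thresholded velocity profile. The function and variable names are ours; the paper defines only the measures, not an implementation.

```python
# Illustrative only: Table 1 time measures from one velocity profile.
import numpy as np

def movement_times(velocity, threshold, sample_dt):
    """Return (acceleration, deceleration, fine_corrective) times in seconds."""
    velocity = np.asarray(velocity, dtype=float)
    above = velocity > threshold
    if not above.any():
        return 0.0, 0.0, 0.0                                      # no movement detected
    start_idx = int(np.argmax(above))                             # movement start
    peak_idx = start_idx + int(np.argmax(velocity[start_idx:]))   # peak velocity
    # Initial convergence: first sample after the peak that falls below threshold.
    below_after_peak = np.flatnonzero(velocity[peak_idx:] < threshold)
    conv_idx = (peak_idx + int(below_after_peak[0])) if below_after_peak.size else len(velocity) - 1
    # End of movement: last sample below threshold (tail of the fine EFBM).
    below = np.flatnonzero(velocity < threshold)
    end_idx = int(below[-1]) if below.size else len(velocity) - 1
    acceleration_time = (peak_idx - start_idx) * sample_dt        # time of IFFM
    deceleration_time = (conv_idx - peak_idx) * sample_dt         # time of coarse EFBM
    fine_corrective_time = max(end_idx - conv_idx, 0) * sample_dt # time of fine EFBM
    return acceleration_time, deceleration_time, fine_corrective_time
```

With the data described in Sec. 3.5, sample_dt would be 1/60 s and the thresholds would be 1.18 cm/s for translation and 8.47 deg./s for rotation.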
3.5. Data Analysis
The position and rotation data are smoothed with a 6 Hz low-pass second-order Butterworth filter in order to remove vibration of the magnetic tracker and tremor of the hand 15. The smoothed position and angle data are differentiated with a three-point central finite difference method. Then, as shown in Fig. 3, the translation velocity is obtained as the rate of change of position over the three axes, and the rotation velocity is obtained separately as the rate of change of the clockwise angle about the vertical axis of the workspace. The start and end of the movement are determined from velocity thresholds of 1.18 cm/s and 8.47 deg./s, respectively. The means and standard errors of the data listed in Table 1 are calculated. ANOVAs are performed on the balanced design of 3 coupling styles x 2 target distances x 2 target angles with repeated measures for each subject. Tukey's method is used as the post hoc test (alpha level set at 5%).
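The smoothing and differentiation step can be sketched as follows. This is a minimal illustration assuming numpy and scipy; the 6 Hz cutoff, second-order Butterworth filter, three-point central difference and 60 Hz sampling rate follow the text, while the zero-phase application of the filter (filtfilt) and all names are our assumptions.

```python
# Minimal sketch of smoothing and differentiation of the tracker data.
import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE_HZ = 60.0   # Fastrak sampling rate
CUTOFF_HZ = 6.0         # low-pass cutoff

def translation_speed(positions_cm):
    """positions_cm: (N, 3) array of tracker positions -> (N,) speed in cm/s."""
    b, a = butter(2, CUTOFF_HZ / (SAMPLE_RATE_HZ / 2.0))          # 2nd-order low-pass
    smoothed = filtfilt(b, a, np.asarray(positions_cm, dtype=float), axis=0)
    # np.gradient uses central differences in the interior, i.e. the
    # three-point scheme v[i] = (x[i+1] - x[i-1]) / (2 * dt).
    velocity = np.gradient(smoothed, 1.0 / SAMPLE_RATE_HZ, axis=0)
    return np.linalg.norm(velocity, axis=1)

# The rotation velocity is obtained analogously from the (unwrapped) clockwise
# angle about the vertical axis of the workspace, filtered and differentiated
# in the same way.
```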
4. RESULTS
4.1. Peak Velocity
The means and standard errors of the PTV for each condition are shown in Fig. 4. The main effects of coupling style (F(2,22) = 20.39, p < 0.001) and of target distance (F(1,11) = 198.27, p < 0.001) were significant. There was a significant interaction between target distance and target angle (F(1,11) = 12.29, p < 0.01). For coupling style, there was a significant difference between the spatially-coupled and spatially-decoupled styles, but no significant difference between spatially-decoupled-1 and -2.

The means and standard errors of the PRV for each condition are shown in Fig. 5. The main effects of coupling style (F(2,22) = 6.82, p < 0.01) and of target angle (F(1,11) = 65.34, p < 0.001) were significant. There was a significant interaction between coupling style and target angle (F(2,22) = 4.69, p < 0.05). For coupling style, there was a significant difference between the spatially-coupled and spatially-decoupled styles, but no significant difference between spatially-decoupled-1 and -2.
4.2. Time Measures
The means and standard errors of the execution time in each movement process for each condition are shown in Fig. 6. There was no significant main effect on the start-time difference; a significant interaction was seen only between coupling style and target angle (F(2,22) = 3.54, p < 0.05). For the TAT, the main effects of coupling style (F(2,22) = 3.46, p < 0.05), target distance (F(1,11) = 16.97, p < 0.001) and target angle (F(1,11) = 7.29, p < 0.05) were significant, with no significant interaction. For the RAT, only the main effect of target angle (F(1,11) = 69.53, p < 0.001) was significant. For the TDT, the main effects of coupling style (F(2,22) = 8.29, p < 0.01), target distance (F(1,11) = 90.64, p < 0.001) and target angle (F(1,11) = 34.20, p < 0.01) were significant, with no significant interaction.
Figure 4. The peak translation velocity [cm/s] for each condition (spatially-coupled, spatially-decoupled-1 and -2; 5 cm and 10 cm target distances; 30 deg. and 60 deg. target angles).
Figure 5. The peak rotation velocity [deg./s] for each condition.
Figure 6. The time of movement [s] for each condition, broken down into no-movement, acceleration, deceleration and fine error-correction time for translation (Trans.) and rotation (Rot.).
For the RDT, the main effects of coupling style (F(2,22) = 3.98, p < 0.05), target distance (F(1,11) = 5.26, p < 0.05) and target angle (F(1,11) = 212.68, p < 0.001) were significant, and a significant interaction between coupling style and target angle was seen (F(2,22) = 3.96, p < 0.05). For the TFT, there was no significant main effect; significant interactions were seen between coupling style and target angle (F(2,22) = 4.92, p < 0.05) and between target distance and target angle (F(1,11) = 4.98, p < 0.05). For the RFT, only the main effect of target distance (F(1,11) = 35.06, p < 0.001) was significant. For the ITT, the main effects of coupling style (F(2,22) = 14.77, p < 0.001), target distance (F(1,11) = 219.51, p < 0.001) and target angle (F(1,11) = 31.29, p < 0.001) were significant, with a significant interaction between target distance and target angle (F(1,11) = 10.25, p < 0.01). For the IRT, the main effects of coupling style (F(2,22) = 7.09, p < 0.01), target distance (F(1,11) = 5.90, p < 0.05) and target angle (F(1,11) = 321.6, p < 0.001) were significant, with a significant interaction between coupling style and target angle (F(2,22) = 4.01, p < 0.05). For the TT, the main effects of coupling style (F(2,22) = 6.24, p < 0.01), target distance (F(1,11) = 72.10, p < 0.001) and target angle (F(1,11) = 12.92, p < 0.01) were significant. Significant interactions were seen between
coupling style and target angle (F(2,22) = 8.60, p < 0.01) and between target distance and target angle (F(1,11) = 12.51, p < 0.01). For the RT, the main effects of target distance (F(1,11) = 24.25, p < 0.001) and target angle (F(1,11) = 7.52, p < 0.05) were significant, with no significant interaction.
5. DISCUSSION
5.1. Movement Processes
In the positioning task, the three movement processes, i.e., the IFFM, the coarse EFBM, and the fine EFBM, are examined using the obtained results.

5.1.1. Initial feedforward movement
There was no difference in effect between spatially-decoupled-1 and -2 on the peak velocity or on the acceleration time for translation and rotation. This shows that, under the present experimental conditions, the peak velocity and the acceleration time were not affected by the difference in the position of the virtual object or of the arm.

Next, we look at the difference between the spatially-coupled and spatially-decoupled styles. The PTV and PRV in the spatially-coupled style were larger than those in the spatially-decoupled styles. Graham et al. showed that the IFFM is similar in the spatially-coupled style and in the real world 4, so the sense of object manipulation in the spatially-coupled style is similar to that in the real world. Users therefore have much experience of manipulation in the spatially-coupled style, but little experience in the spatially-decoupled style. It appears that, in the spatially-decoupled style, users cannot produce a velocity output corresponding to the initially perceived distance and angle of the target as well as they can in the spatially-coupled style. Although the TAT in the spatially-decoupled style was longer than in the spatially-coupled style, no dependence of the RAT on coupling style was seen. In other words, the translation movement is suppressed in both the speed and time domains, whereas the rotation movement is suppressed only in the speed domain. This can be interpreted to mean that the translation movement is performed with careful planning, whereas the rotation movement does not involve the same level of prudence. Pratt et al. showed that practice improves the IFFM 16; it may be that the subjects had much experience of translation manipulation, for example mouse operation, but less experience of rotation manipulation.

The PTV was mainly influenced by the target distance, whereas the PRV was mainly influenced by the target angle. It was also confirmed that the TAT became longer as the target distance became larger, and that the RAT became longer as the target angle became larger. These results show that the velocity output and the time spent on acceleration for translation and for rotation are each adjusted to the initially perceived target distance and target angle. This is similar to the finding on translation obtained by Graham et al. 4. Moreover, it can be considered that translation and rotation are mostly carried out independently in the IFFM; this independence of translation and rotation agrees with the suggestion of Brooks 17. Admittedly, an interaction between target distance and target angle was seen for the PTV, and the TAT became shorter when the target angle became larger. However, since the difference of the means for the PTV was small and only two levels were set up for target distance and target angle in this experiment, these effects may be ignored. As for the TAT, the rotation about the vertical axis is in the same direction as the translation in this experiment, so the rotation speed may add to the translation speed, which could explain this result. In addition, an interaction between coupling style and target angle was seen for the PRV.
In a small rotation, users may produce a rotation velocity output according to the initially perceived target angle regardless of the coupling style.

5.1.2. Coarse error-corrective feedback movement
For the deceleration time, there was no difference in effect between spatially-decoupled-1 and -2; the influence of the position of the virtual object and of the arm is small here. Comparing the spatially-coupled and spatially-decoupled styles, the TDT and RDT in the spatially-decoupled style were longer than those in the spatially-coupled style. This result shows that the coarse EFBM is more difficult to perform in the spatially-decoupled style than in the spatially-coupled style. This finding is supported by research suggesting the importance of consistency between visual feedback and haptic feedback 12.

It was confirmed that the TDT and RDT are influenced by both target distance and target angle. This means that the translation and rotation movements depend on each other, and that the interdependence of translation and rotation suggested by Wang et al. 11 appears in the coarse EFBM. For the RDT, although there was little difference between coupling styles when the target angle was small, the RDT in the spatially-decoupled style became longer than that in the spatially-coupled style as the target angle became large. As mentioned above, the coarse EFBM is difficult to perform in the spatially-decoupled style; in particular, it appears to become more difficult as the amount of rotation increases.
5.1.3. Fine error-corrective feedback movement
Coupling style, target distance and target angle were not seen to have a great effect on the TFT. This suggests that this part of the translation movement is not influenced by these conditions, and that the remaining fine EFBM is mostly performed on the basis of visual feedback. However, an interaction between coupling style and target angle and an interaction between target distance and target angle were seen. In this experimental setup, there may be a relationship between the position or angle of the target and the position of the subject's hand or the angle of his wrist; the explanation for this relationship is as yet unknown and further study is required.

The RFT became longer when the target distance became longer, and the effects of the other factors were not seen. Given this strong effect of target distance, the fine EFBM of rotation appears to be performed in correspondence with the target distance: the RFT increases as the distance to the target increases. Moreover, the RFT was much longer than the TFT. That is to say, it is difficult to perform the fine EFBM of translation and rotation simultaneously and intentionally; the fine EFBM of translation may be performed first, followed by the fine EFBM of rotation. Although systematic effects like those seen for the IFFM or the coarse EFBM were not found for the fine EFBM, the time of the fine EFBM is very long. Therefore, it is important to examine the fine EFBM.
5.2. Translation Time and Rotation Time
Wang et al. showed that, in the spatially-coupled style, translation requires a longer time than rotation, and that the translation time is the determinant of the task completion time 11-13. The ITT and IRT in our experiment showed the same relationship as their results. However, this relationship differs for the TT and RT, which contain the fine error-corrective time: here the RT was the determinant of the task completion time. Ware demonstrated that the completion time of a positioning task in the spatially-decoupled style is longer than that of a translation-only task over the same distance and almost the same as that of a rotation-only task over the same target angle 7. His result indicates that the rotation movement is the determinant of the task completion time, and our result supports this. There was also no difference in effect between the spatially-coupled and spatially-decoupled styles on the relationship between translation time and rotation time.

Under various target conditions, Wang et al. showed a relationship in which the IRT is contained within the ITT 11. However, according to their experimental results, the rotational movement starts early in comparison with the translation movement. Furthermore, when the target distance was 5 cm and the target angle was 60 deg., we found that the ITT was clearly longer than the IRT. That is, the translation time and rotation time change in correspondence with the balance of target distance and target angle. Moreover, in a situation where the user recognizes the rotation direction before starting each trial, as in this experiment, the IRT may be longer than the ITT; if it is difficult for the user to recognize the optimal rotation direction before starting the trial, even more rotation time can be expected. Further examination that includes the aspect of mental rotation 18 is required.
5.3. Implications for User Interface Design
For the design or evaluation of user interfaces, this information can express the perceptual characteristics of human movement very effectively. For example, since the peak velocity and acceleration time of the IFFM reflect the target distance and angle perceived just before the movement, they can be used to check whether the user has performed a satisfactory movement; this could be used to evaluate an otherwise vague, intuitive index. Moreover, if the initially perceived target distance and angle are appropriately reflected in the peak velocity or acceleration time of the IFFM, it may be possible to predict the position or orientation of a target from a slight motion.

The coarse EFBM is a process in which various elements, such as the distance and angle of the target, visual and haptic feedback, and the interdependence of translation and rotation, become entangled while the movement decelerates, and these elements can increase or decrease its duration considerably. Therefore, shortening this part of the process will be a key way of raising the efficiency of object manipulation. The fine EFBM is a process based mainly on visual feedback; it may be shortened by strengthening the visual feedback linked to it, for example by changing the color of an object when the goal of a task is being achieved.

The time taken for object manipulation was clearly shorter in the spatially-coupled style than in the spatially-decoupled style, and the rotation time was the longer component in both styles. If intuitive and efficient manipulation is required, it is better to use the spatially-coupled style. In practice, however, the spatially-decoupled style must be used in many cases, and a design suited to the purpose of the task must be implemented to raise efficiency. For example, when importance is attached to rotation manipulation, a solution is needed that decreases the time taken for rotation; one such method is to change the gain of rotation between a controller and a cursor 19 (a minimal sketch of such a rotation gain is given at the end of this subsection). Although there has been a comparatively large amount of research on translation gain, and it has been found effective to apply an approximately twofold fixed gain to translation, dynamically changing the gain can instead worsen performance 20,21. According to our results, the characteristic features of translation and
rotation differ, but there are also many similarities. Therefore, it may be possible to apply the knowledge acquired on translation to rotation and examine the results. Such an examination will be required in the future.
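As an illustration of the rotation-gain idea mentioned above (and not a technique used in this paper), the sketch below amplifies the controller's rotation about the task's vertical axis before applying it to the cursor; the gain value and the names are purely hypothetical.

```python
# Hypothetical rotation gain between controller and cursor about the vertical axis.
ROTATION_GAIN = 2.0  # assumed amplification factor, not taken from the paper

def cursor_angle_deg(controller_angle_deg, start_angle_deg=0.0):
    """Map the controller's rotation (relative to its start angle) to the cursor,
    amplifying it by ROTATION_GAIN about the workspace's vertical axis."""
    return start_angle_deg + ROTATION_GAIN * (controller_angle_deg - start_angle_deg)
```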
6. CONCLUSIONS
We found quite different tendencies for the three movement processes. For example, the peak velocity for both translation and rotation in the spatially-coupled style was larger than that in the spatially-decoupled style, and the time of the IFFM for translation in the spatially-coupled style was shorter than that in the spatially-decoupled style. The time of the coarse EFBM for both translation and rotation in the spatially-coupled style was shorter than that in the spatially-decoupled style. There was no significant difference between the spatially-coupled and spatially-decoupled styles in the fine EFBM.

When designing or evaluating user interface systems, investigating the processes of hand movement is very useful and provides information on the user's perceptual characteristics of movement. Although the systematic effects of the target on the fine EFBM are not as strong as those on the IFFM or the coarse EFBM, the time of the fine EFBM was long. The rotation movement was the determinant of the task completion time in the positioning task regardless of the coupling style.
ACKNOWLEDGMENTS A part of this research was supported by Grant-in-Aid for Scientific Research (B2)(2) 13480104 from the Japan Society for the Promotion of Science.
REFERENCES
1. R. Woodworth: "The accuracy of voluntary movement," Psychological Review Monograph Supplement, 3, 1899.
2. C. L. MacKenzie, R. G. Marteniuk, C. Dugas, B. Eickmeier: "Three-dimensional movement trajectories in Fitts' task: implications for motor control," The Quarterly Journal of Experimental Psychology, 39A, pp. 629-647, 1987.
3. E. Graham, C. L. MacKenzie: "Pointing on a computer display," in Proc. of ACM CHI '95 Conference Companion on Human Factors in Computing Systems, pp. 314-315, 1995.
4. E. Graham, C. L. MacKenzie: "Physical versus virtual pointing," in Proc. of ACM CHI '96 Conference on Human Factors in Computing Systems, pp. 292-299, 1996.
5. N. Walker, D. E. Meyer, J. B. Smelcer: "Spatial and temporal characteristics of rapid cursor-positioning movements with electromechanical mice in human-computer interaction," Human Factors, 35-3, pp. 431-458, 1993.
6. C. Ware, D. R. Jessome: "Using the bat: a six-dimensional mouse for object placement," IEEE Computer Graphics and Applications, pp. 65-70, 1988.
7. C. Ware: "Using hand position for virtual object placement," Visual Computer, 6-5, pp. 245-253, 1990.
8. M. R. Masliah, P. Milgram: "Measuring the allocation of control in a 6 degree-of-freedom docking experiment," in Proc. of ACM CHI 2000 Conference on Human Factors in Computing Systems, pp. 25-32, 2000.
9. S. Zhai: "Human performance in six degree of freedom input control," Doctoral Dissertation, Department of Industrial Engineering, University of Toronto, 1995.
10. S. Zhai, P. Milgram: "Quantifying coordination in multiple dof movement and its application to evaluating 6 dof input devices," in Proc. of ACM CHI '98 Conference on Human Factors in Computing Systems, pp. 320-327, 1998.
11. Y. Wang, C. L. MacKenzie, V. A. Summers, K. S. Booth: "The structure of object transportation and orientation in human-computer interaction," in Proc. of ACM CHI '98 Conference on Human Factors in Computing Systems, pp. 312-319, 1998.
12. Y. Wang, C. L. MacKenzie: "Effects of orientation disparity between haptic and graphic displays of object in virtual environments," in Proc. of INTERACT '99, pp. 391-398, 1999.
13. Y. Wang, C. L. MacKenzie: "The role of contextual haptic and visual constraints on object manipulation in virtual environments," in Proc. of ACM CHI 2000 Conference on Human Factors in Computing Systems, pp. 532-539, 2000.
14. C. Ware, J. Rose: "Rotating virtual objects with real handles," ACM Transactions on Computer-Human Interaction, 6-2, pp. 162-180, 1999.
15. D. A. Winter: Biomechanics and Motor Control of Human Movement, New York: Wiley, 1990.
16. J. Pratt, R. A. Abrams: "Practice and component submovement: The roles of programming and feedback in rapid aimed limb movements," Journal of Motor Behavior, 28, pp. 149-156, 1996.
17. F. P. Brooks Jr.: "Grasping reality through illusion: interactive graphics serving science," in Proc. of ACM CHI '88 Conference on Human Factors in Computing Systems, pp. 1-11, 1988.
18. L. Parsons: "Inability to reason about an object's orientation using an axis and angle of rotation," Journal of Experimental Psychology: Human Perception and Performance, 21-6, pp. 1259-1277, 1995.
19. I. Poupyrev, S. Weghorst, S. Fels: "Non-isomorphic 3D rotational techniques," in Proc. of ACM CHI 2000 Conference on Human Factors in Computing Systems, pp. 540-547, 2000.
20. H. Jellinek, S. Card: "Powermice and user performance," in Proc. of ACM CHI '90 Conference on Human Factors in Computing Systems, pp. 213-220, 1990.
21. E. D. Graham: "Virtual pointing on a computer display: non-linear control-display mappings," in Proc. of Graphics Interface '96, pp. 39-46, 1996.