Adding Haptic Feedback to Touch Screens at the Right Time
Yi Yang1,2, Yuru Zhang1, Zhu Hou1, Betty Lemaire-Semail2
1 State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, 100191 Beijing, China
{eyang, houzhu_buaa}@me.buaa.edu.cn
[email protected]

ABSTRACT
The lack of haptics on touch screens often causes errors and user frustration. However, haptic feedback added to touch screens to address this problem must be applied at an appropriate time. In this paper we present two experiments to explore when best to add haptic feedback during the user's interaction. We separate the interaction process into three stages: Locating, Navigation and Interaction. We compare two points in the Navigation stage in order to establish the optimal time for adding haptic feedback at that stage. We also compare applying haptic feedback at the Navigation stage and at the Interaction stage to establish the latest point at which haptic feedback can be added. Combining previous research with our own, we find that the optimal time for applying haptic feedback to the target GUI in the Navigation stage is when the user reaches his destination, and that haptic feedback improves users' performance only if it is added before the Interaction stage. These results should alert designers to the need to take timing into consideration when adding haptic feedback to touch screens.
Categories and Subject Descriptors H5.2 [Information interfaces and presentation]: User Interfaces – graphical user interfaces, haptic I/O, input devices and strategies.
General Terms Design, experimentation, performance.
Keywords Haptic feedback, force feedback, touch screen, timing.
2 L2EP-IRCICA, University Lille 1, 50 Avenue Halley, Parc Scientifique de la Haute Borne, 59650 Villeneuve d'Ascq, France
[email protected]

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ICMI'11, November 14-18, 2011, Alicante, Spain. Copyright 2011 ACM 978-1-4503-0641-6/11/11...$10.00.

1. INTRODUCTION
Touch screens have become an increasingly attractive user interface in our daily lives. However, a significant disadvantage of touch screens is their lack of haptic feedback [1, 2]. Users' attention to visual and audio feedback is weakened if there is no haptic feeling associated with real controls such as buttons, switches and knobs [3]. Moreover, users' performance also deteriorates, since users cannot rely on haptic cues to accomplish even the most basic interaction tasks. For example, a range of errors, such as wrong letters, slips and double taps, occurs when tapping on a soft keyboard on a PDA [4].

The direct and effective way to address the lack of haptic feedback is to add artificially rendered haptic feedback while the user operates on the screen. However, a question arises immediately: how should haptic feedback be added so that it improves users' performance? Previous research has produced mixed results, showing that adding haptic feedback was sometimes successful and sometimes not (see the review in Section 2). How can we account for these results? We propose that a significant issue in adding haptics is timing. Beyond guiding strategies [5], the magnitude of the feedback [5] and even unsatisfactory manufacturing of the prototype [6], an important reason why adding haptics to touch screens failed to improve users' performance is that the haptic feedback was not added at an appropriate time.

We separate the process of a single interaction between the user and a GUI into three stages, as presented in Figure 1: Locating, Navigation and Interaction. We investigate when to apply haptic feedback to touch screen GUIs through two experiments. The first experiment compares adding the same haptic feedback at the edge and at the center of a GUI. This experiment answers the question of whether haptic feedback should be added earlier or later in the Navigation stage. The second experiment compares adding haptic feedback at the Navigation stage and at the Interaction stage to show that haptic feedback can only improve users' performance before an interaction is implemented. In the following sections, we introduce the three stages in the interaction process and review the literature on adding haptic feedback at these stages.
We then describe the force feedback touch screen that we use and present the two experiments. Finally, we discuss when to add haptic feedback to touch screens.
Figure 1. Three stages in a single interaction process.
2. ADDING HAPTIC FEEDBACK TO TOUCH SCREENS
Little research has been undertaken to determine when to add haptic feedback to touch screens. In order to investigate the effect of timing on adding haptic feedback to touch screens, we intentionally separate the interaction process into three stages and categorize previous research into these stages. We find that previous research has yielded different results depending on the stage at which haptic feedback was applied.

Stage 1: Locating. A user may wish to interact with a GUI for a particular purpose, such as editing a document or viewing a picture. Or he may be reacting to a reminder, such as a message pop-up. In either case, he first needs to locate the particular GUI. At this stage, haptic feedback can be added to help the user locate the GUI on the touch screen. The haptically enhanced progress bar is a typical example. Different patterns produced by varying the frequency [7-9], magnitude [9] and rhythm [8] of vibrations were applied to indicate the state or the speed of an ongoing program. Experimental results showed that, with haptic feedback, participants responded faster to the completion of the program and thus completed the task faster. In these examples, haptic feedback reminded the user to react to a particular event – the end of a program. The event was mapped to a particular GUI. The user reacted to the haptic feedback by interacting with that GUI, for example by clicking directly on the "Done" button [9]. Haptic feedback therefore eliminated the time that the user had to spend locating the GUI. This effect is similar to that of a twinkling icon which reminds the user of an instant message.

Stage 2: Navigation. After the target GUI is located, the user needs to move the input interface (mouse, pen or finger) toward the GUI, cross its boundary and finally reach the inside of the target. Adding haptic feedback at this stage to assist the user in reaching the target is useful.
Steering tasks are commonly used to evaluate users' performance as they move the input interface toward the target GUI. Sun et al. [10] investigated multi-modal "error feedback" in a circular steering task on a tablet PC. They attached a vibration motor to the end of a stylus to generate vibration whenever the user moved out of a circular tunnel. Compared with visual and audio feedback, users performed most accurately with haptic feedback. However, there was no significant difference in movement time between the feedback types, since the tactile feedback only indicated errors rather than helping the user move in the quickest way. Although not implementing the steering task on touch screens, Dennerlein et al. [11] used a force feedback mouse to apply a force field in a haptic tunnel that pulled the cursor to the center of the tunnel. This effect increased movement speed in a linear steering task by 52%. Campbell et al. [12] compared users' performance under different feedback conditions with a tactilely enhanced TrackPoint. They created a series of haptic bumps in a circular tunnel so that the user was able to follow the bumps and keep moving along the center of the tunnel. This resulted in significantly faster task completion compared with visual-only feedback. However, the prerequisite for these effects was that the target be known to the computer [9]. This condition is rarely met, so such haptic feedback is seldom practical. The alternative is to apply haptic feedback to GUIs when the user makes contact with them.
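The pulling effect attributed to Dennerlein et al. above can be sketched as a simple spring-like force field toward the tunnel's center line. This is our illustration only; the gain and saturation values are assumptions, not values from the cited study.

```python
# Sketch of an attractive "haptic tunnel" force field: the cursor is pulled
# toward the tunnel center line with a force proportional to its lateral
# offset, clamped so it stays comfortable. Gain and limit are assumed values.

K_N_PER_MM = 0.05   # spring gain, N per mm of lateral offset (assumed)
F_MAX_N = 1.0       # saturation limit so the pull never gets too strong (assumed)

def tunnel_force(lateral_offset_mm):
    """Return the force (N) perpendicular to the tunnel axis.

    A positive offset produces a negative (restoring) force, pulling the
    cursor back toward offset 0 (the tunnel center line).
    """
    f = -K_N_PER_MM * lateral_offset_mm
    return max(-F_MAX_N, min(F_MAX_N, f))
```

The rule differs from the error-only vibration of Sun et al.: the field acts continuously inside the tunnel, actively steering the cursor rather than merely signaling a boundary violation.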
Boundaries are the first items that a user encounters when reaching a GUI. Adding haptic feedback to simulate the boundaries of buttons [13], icons or keys [14] is a way of introducing physical properties to GUIs. This technique enables the user to feel the edges of GUIs as if he were touching mechanical buttons or keys. The tangible edges remind the user that he is encountering an icon or a button, so that he can slow down or stop moving if the icon or button is the target, thus avoiding errors such as sliding over the target [15]. Poupyrev et al. [16] used a piezoelectric-actuated display-glass system to evaluate users' performance in a reciprocal dragging task with a stylus. Tactile vibration at the boundary improved user performance in the drawing (dragging) task by a statistically significant amount. Similar performance improvements were also found in a text entry task with haptically enhanced key boundaries on a PDA [14], and 22% faster task completion was observed in a linear list selection task with haptically enhanced list item boundaries on a handheld device [17]. In addition, shorter final positioning times were found when tactile feedback was added at the target boundary in a routine target selection task with a tactilely enhanced mouse [18]. However, there is little research comparing adding the same haptic feedback at different times. For example, is adding the same haptic feedback at the boundary (earlier) better than at the center (later) of the target? It is worth making this comparison to find the optimal time to add haptic feedback at this stage.

Stage 3: Interaction. This stage begins after the user reaches the target GUI. At this stage, interactions such as clicking a button or releasing a dragged file are implemented. If haptic feedback is added after an interaction is implemented, it has little effect on the user's performance.
For example, adding haptic feedback to simulate the button-click feeling has been shown to have no effect on task completion speed or error rate in number entering tests [9, 19] and Fitts' tapping experiments [16, 20]. However, this result also varied with the coupling type of the input device and the display surface. Forlines et al. [20] found that a tactile button-click feeling improved performance in the indirect input condition (by 11.9%) but had no effect in the direct input condition of the Fitts' pointing experiment. However, in the crossing experiment, tactile feedback after a crossing provided almost no benefit in the indirect input condition but led to about 11% faster selection times in the direct input condition.

Summary. We can identify a pattern in the literature reviewed above: users' performance was improved mostly by adding haptic feedback before Stage 3. This indicates that there may be a time limit within which haptic feedback is effective. In addition, we hypothesize that there exists an optimal time in each stage for adding haptic feedback. However, there is little research into the timing of adding haptic feedback to touch screens. Our contribution explores the effect of the timing of adding haptic feedback to touch screens on users' performance and offers guidelines accordingly.
3. FORCE FEEDBACK TOUCH SCREEN PROTOTYPE
The force feedback touch screen prototype consists of three parts: a 17-inch LCD monitor, an off-the-shelf infrared (IR) touch screen and a FingViewer-I force feedback device. The prototype is presented in Figure 2.
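The prototype recovers the finger position from measured cable lengths (the forward kinematics of the cable mechanism, described below). As a rough planar illustration (our sketch, not the authors' code; the anchor coordinates in the test are hypothetical), the position can be computed from two cable lengths by intersecting two circles centered at the pulley anchors:

```python
import math

def thimble_position(p1, p2, l1, l2):
    """Estimate the planar thimble position from two cable lengths (sketch).

    p1, p2: (x, y) anchor points of two pulleys.
    l1, l2: cable lengths deduced from the encoder counts.
    Returns the circle-intersection point lying inside the workspace
    (taken here, by assumption, to be the solution with the larger y).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("cable lengths inconsistent with anchor spacing")
    # Distance from p1 to the chord joining the two intersection points
    a = (l1**2 - l2**2 + d**2) / (2 * d)
    h = math.sqrt(max(l1**2 - a**2, 0.0))
    # Midpoint of the chord, then offset perpendicular to the p1->p2 axis
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    x1, y1 = mx + h * dy / d, my - h * dx / d
    x2, y2 = mx - h * dy / d, my + h * dx / d
    return (x1, y1) if y1 > y2 else (x2, y2)
```

With four cables the device has redundant measurements; a real implementation would fuse all four lengths (e.g. by least squares) rather than use only two, which also disambiguates the two circle intersections.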
Figure 2. The prototype of the force feedback touch screen.

The FingViewer-I force feedback device is a planar cable-driven haptic device. It is used because of its transparent workspace. It comprises four identical actuator units (Figure 2). Each actuator unit includes a pulley, a DC motor and an optical encoder. The pulley and the encoder are fixed to the shaft of the motor. One end of the cable is wound on the pulley; the other end is connected to a plastic thimble. The rotation of the pulley is tracked by the encoder to deduce changes in the length of the cable. The position of the thimble (i.e. the index finger) is then obtained by calculating the lengths of the cables, i.e. the forward kinematics of the mechanism. The force feedback is realized by controlling cable tensions. When the user moves the thimble and contacts a virtual object, the motors tense the cables so that the cable tensions increase to resist the user's finger and prevent it from penetrating the virtual object. In this way, the user perceives the force feedback. The FingViewer-I can realize 3 Degree-Of-Freedom (DOF) force feedback on a plane (two translations and one rotation), but we only use the two translational force feedbacks in the experiments. The principle of the planar 3-DOF force feedback device can be found in [21].

The IR touch screen is fixed on the frame of the LCD monitor by the four actuator units. The resolution of the IR touch screen is only about 2 mm, which is not sufficient for position tracking, but its scanning frequency of around 50 to 60 Hz is sufficient to detect tapping on the screen. We therefore use the force feedback device to track the finger position and the IR touch screen to detect tapping. The position resolution of the force feedback device is 0.1 mm. In the experiments, the LCD monitor was connected to a Lenovo F40M laptop running C++ and OpenGL code. The resolution of the monitor was set to 1024 × 768 pixels for presentation of the targets. The frequencies of the visual and haptic rendering were 50 Hz and 1000 Hz respectively.

4. EXPERIMENT ONE: ADDING HAPTICS AT STAGE 2 - EARLY OR LATE?
4.1 Participants
12 participants (3 females and 8 males) between the ages of 20 and 29 were recruited from the laboratory pool. All participants were right-handed. They all had some experience of handling force feedback devices and touch screens but had never operated the force feedback touch screen before. The participants volunteered for the study and were not paid.

4.2 Task and Stimuli
This experiment was designed according to Task #3 (Dragging Task) described in the ISO 9241 standard, part 9 [22], which is based on Fitts' experimental paradigm. Participants were required to drag an object and drop it on the destination target. Two rectangular stripes were located on the screen (Figure 2). Participants pressed on the starting stripe and maintained continuous contact with the screen to drag the "object". There was, however, no moving object on the display to show the movement of dragging, since the finger and the dragged object were collocated. Participants relied on their fingers' positions to decide whether they had reached the target. They released the object by lifting their fingers off the screen on the target stripe.

Figure 3. Force feedbacks in Experiment 1.

There were three feedback conditions – visual-only feedback (V), force feedback at the edge of the stripe (FE) and force feedback at the center of the stripe (FC). In all of these conditions, the target stripe was displayed in green and the start stripe in grey. The opposite target turned green only upon a successful drop on the target stripe. The force feedback was a constant force perpendicular to the dragging direction. It operated when the finger was inside a feedback area 2 mm wide; a narrower area would have let participants pass through without even feeling it. The magnitude of the force feedback was 0.9 N, which felt like a bump on the screen (Figure 3). The force feedback was added at the edge or at the center of the target stripe. Before the experiment, the only instruction to the participants was to complete the task as quickly and accurately as possible.

4.3 Design
The experiment was a 3 × 3 × 3 repeated-measures within-subjects design. The independent variables and levels were as follows:
Feedback conditions: V, FE, FC
Target Amplitude (A): 40, 80, 160 mm
Target Width (W): 5, 10, 20 mm
The target amplitudes and widths were selected according to [16], but the 2 mm target width was replaced by 5 mm since the force feedback area was at least 2 mm wide. Moreover, our input interface was the participant's index finger instead of a stylus. According to the ISO standard [22], the recommended size of the touch-sensitive area should be at least equal to the breadth of the index finger's distal joint for the ninety-fifth percentile male; the 2 mm wide target was too narrow for touch screen operation.

The participants were randomly divided into three groups. The order of presentation of the three feedback conditions was counterbalanced across groups using a Latin-square design. Each participant completed 2 blocks of trials for each feedback condition. Within each block, participants performed 4 repetitions of each of the 9 A-W combinations, presented in random order. Prior to each new feedback condition, participants were given a practice block. Breaks were allowed between blocks and between changes of feedback condition, but participants had to complete all feedback conditions in one sitting. The entire experiment lasted approximately 30 minutes per participant.

To measure and compare performance in the three feedback conditions, three dependent variables were used: task completion time (ms), error rate (%) and throughput (bit/sec). Error rate was the percentage of out-of-target drops. Throughput was calculated with the "effective index of difficulty" as recommended in ISO 9241-9 [22]. In addition, the position where a participant dropped the object was recorded; this variable was defined as offset (mm).
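The throughput computation with the "effective index of difficulty" follows ISO 9241-9: the effective width is We = 4.133 × SDx (where SDx is the standard deviation of the end-point offsets), IDe = log2(Ae/We + 1), and throughput is IDe divided by the mean movement time. A minimal sketch (our code, not the authors'):

```python
import math
import statistics

def throughput(amplitudes_mm, endpoints_mm, times_ms):
    """ISO 9241-9 style throughput (bit/sec) for one A-W condition (sketch).

    amplitudes_mm: nominal movement amplitude per trial
    endpoints_mm:  end-point coordinates along the movement axis, measured
                   relative to the target center (the 'offset' variable)
    times_ms:      movement time per trial
    """
    sd_x = statistics.stdev(endpoints_mm)       # spread of end points (SDx)
    w_e = 4.133 * sd_x                          # effective target width
    a_e = statistics.mean(amplitudes_mm)        # effective amplitude
    id_e = math.log2(a_e / w_e + 1)             # effective index of difficulty
    mt_s = statistics.mean(times_ms) / 1000.0   # mean movement time in seconds
    return id_e / mt_s
```

This is why a large SDx directly penalizes throughput, a point that matters in the discussion of Experiment 1 below: more scattered drop positions inflate We and so deflate IDe.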
4.4 Results
4.4.1 Task completion time
The mean task completion times for the visual-only feedback (V), force feedback at the edge (FE) and force feedback at the center (FC) conditions were 1157 ms, 1066 ms, and 1044 ms respectively (Figure 4). The feedback condition had a significant effect on task completion time (F2,22 = 79.3, p < .0001). The post-hoc Tukey test showed that participants in the FE and FC conditions completed the task with increased speeds of 7.9% and 9.8% respectively (p < .0001). Although the mean task completion time in the FC condition was less than that in the FE condition, the difference was not statistically significant (p = .058).

Figure 4. Mean task completion time in the dragging task, by feedback condition.

Figure 5 presents the task completion times in the three feedback conditions versus target widths. In general, visual-only feedback (V) resulted in the longest task completion time. The mean task completion time in the FC condition was shorter than that in the FE condition. The differences between the FE and FC conditions at small target widths (W = 5 mm, 10 mm) were not significant. However, at the large target width (W = 20 mm), participants in the FC condition completed the task faster than in the FE condition. This effect was statistically significant (p < .01).

Figure 5. Mean task completion time in the dragging task, by feedback condition and target width.

4.4.2 Error rate
The error rates overall were much higher for the FC (mean = 24.4%) and FE (mean = 22.8%) conditions than for the visual-only condition (mean = 12.7%). The effect was statistically significant (F2,22 = 29.8, p < .0001). However, the difference between the FC and FE conditions was not significant. The high error rate is addressed in the discussion section below.

4.4.3 Throughput
Throughput is the counterpart of the "index of performance" defined by Fitts. The measure of throughput takes into consideration not only the speed of the performance but also its accuracy. In this experiment, the effect of feedback conditions on throughput was not significant (F2,22 = 2.60, p > .05), though the mean throughput of the visual-only condition (mean = 1.92 bit/sec) was greater than those of the FC (mean = 1.83 bit/sec) and FE (mean = 1.80 bit/sec) conditions.

4.4.4 Offset
The offset indicates the distribution of locations where participants dropped the dragging object. The feedback conditions had a significant effect on offset (F2,22 = 3.75, p < .05). The offset in the visual-only condition (mean = 0.81 mm) was smaller than in the FE condition (mean = 1.15 mm). This effect was statistically significant (p < .05). However, there was no statistically significant difference between either the FC (mean = 1.05 mm) and FE conditions or the FC and visual-only conditions.

4.5 Discussion
In this experiment, the same haptic feedback was applied at two successive locations (edge and center) on the target. Task completion speed was increased by adding haptic feedback. This result showed that adding haptic feedback can improve users' efficiency in Stage 2. The result can be explained by the fact that participants were assured by the haptic feedback that they had reached the target, so that they released the dragging object (lifted their fingers from the screen) quickly without hesitation. In the visual-only condition, however, participants had to aim at the target carefully in order to find a "safe" place which guaranteed that the object would be released accurately on the target. This resulted in a longer task completion time.

It may be noted that we did not inform participants what the haptic feedback stood for, where the feedback was applied and
how to react to the feedback. We intended to explore the natural reaction of participants. There was no doubt that participants handled the haptic feedback as soon as they perceived such a signal. Nevertheless, how a participant reacted to this signal depended on his or her intention. In this experiment, the only instruction to participants was to complete the task as quickly and accurately as possible. As a consequence, not releasing the object until reaching the center of the target was far preferable to releasing the object as soon as the edge of the target was encountered. This pattern was supported by the fact that the mean offsets were positive in all feedback conditions, indicating that participants intended to lift their fingers after they reached the center of the target.

This allows us to explain why adding haptic feedback at the center was better than adding it at the edge. When a participant encountered the haptic feedback at the edge, he had to handle this signal. However, finding that the location where he perceived the signal was not his target (i.e. the center of the target stripe), he would ignore the signal and keep moving until he reached the center. In this process, the participant's movement was interrupted by the "bump" created by the force feedback. As a result, task completion was delayed. By contrast, when haptic feedback was applied at the center of the target, the participant did not slow down until he reached the center. Meanwhile, the participant's intention to stop at the center was reinforced by the haptic feedback. Therefore the participant lifted his finger as soon as he perceived the haptic feedback at the center, resulting in faster task completion. This explanation was supported by the result that the decrease in task completion time for large targets (W = 20 mm) was statistically significant (p < .01).
For small targets (W = 5 mm, 10 mm), the decrease was not significant due to the narrow spacing between the edge and the center.

The overall error rate in the experiment was high, especially when haptic feedback was added. For the visual-only condition, high error rates have also been found in other touch interaction research [20, 23]. In general, the errors were a result of selecting small targets. In our experiment, force feedback was applied as a pulse which pulled the finger upward during its lateral movement (Figure 3(a)). Participants had to balance this force as soon as they perceived it so that they could keep moving horizontally. After the force feedback disappeared, however, participants could not immediately release the balancing force and stop moving their fingers. So they often lifted their fingers while sliding off the target stripe, especially when the targets were narrow. This effect not only caused errors but also produced a large standard deviation (SDx), which significantly decreased throughput. A properly designed haptic effect is therefore very important even when the haptic feedback is added at the right time.

In summary, this experiment has shown that adding haptic feedback at Stage 2 is effective in improving users' efficiency. Adding haptic feedback at the center of the GUI is slightly better than adding the same feedback at the edge. The haptic effect needs to be improved so that it acts in accordance with the user's intention.
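For reference, the F2,22 statistics reported in the results correspond to one-way repeated-measures ANOVAs over the 3 feedback conditions and 12 participants, giving 3 − 1 = 2 and (3 − 1)(12 − 1) = 22 degrees of freedom. A minimal pure-Python sketch of that computation (ours, for illustration):

```python
def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA (sketch).

    data: one row per subject, one value per condition.
    Returns (F, df_condition, df_error).
    """
    n = len(data)       # subjects
    k = len(data[0])    # conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    # Subject variability is removed from the error term, which is what
    # distinguishes the repeated-measures F from a between-subjects ANOVA.
    ss_error = ss_total - ss_cond - ss_subj
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_error / df_error)
    return f, df_cond, df_error
```

With 12 subjects and 3 conditions the function returns df_condition = 2 and df_error = 22, matching the F2,22 values quoted throughout the paper.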
5. EXPERIMENT TWO: ADDING HAPTIC FEEDBACK AT STAGES 2 AND 3 - EARLY OR NEVER?
5.1 Participants
12 participants (2 females and 10 males) between the ages of 24 and 29 were recruited from the laboratory pool. All participants were right-handed. They all had some experience of handling force feedback devices and touch screens but had never operated the force feedback touch screen before. The participants were volunteers and were not paid. Nine of them had participated in Experiment 1.
5.2 Task and Stimuli
This experiment was designed according to Task #1 (Tapping Task) described in the ISO 9241 standard, part 9 [22], which imitates Fitts' original reciprocal tapping task. The general task was similar to that of Experiment 1. Participants tapped alternately between the two stripes as quickly and accurately as possible. There were three feedback conditions – visual-only feedback (V), force feedback before the participant tapped on the target (FB) and force feedback after a successful tap on the target (FA). In all three feedback conditions, the target stripe was displayed in green and the start stripe in grey. The opposite target turned green only upon a successful tap on the target stripe.
Figure 6. Force feedbacks in Experiment 2.

The magnitude of the force feedback was again 0.9 N. Based on the experience gained in Experiment 1, in the FB condition we added the force feedback at the center of the target in an area 2 mm wide. Moreover, the force feedback was designed to resist the finger as it moved inside the feedback area. It felt like a wall that helped participants stop at the center of the target (Figure 6(a)). Note that the feedback was applied before the participant tapped on the target; when the participant tapped on the screen, there was no force feedback. In the FA condition, force feedback was added after the participant tapped successfully on the target. A button-click feeling was simulated by adding a pulse of force which pulled the participant's finger slightly in the longitudinal direction (Figure 6(b)). Similar simulations have been achieved with tactile feedback, such as the TouchSense system [24] and an ultrasonic vibration device [25]. The button-click effect felt like a "crisp click" and was recognized by the participants. Before the experiment, participants were instructed to complete the task as quickly and accurately as possible. However, they were not instructed how to respond to the force feedback. The design of the experiment was identical to that of Experiment 1, except that participants completed three blocks in each feedback condition instead of two.
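The two force conditions can be summarized as simple rendering rules. This is our sketch, not the authors' implementation; the function names and the pulse duration are assumed values.

```python
# Sketch of the two force conditions in Experiment 2. FB renders a resistive
# "wall" at the target center BEFORE the tap; FA renders a short button-click
# pulse AFTER a successful tap. Magnitude matches the paper; pulse length is
# an assumption of ours.

FORCE_N = 0.9   # force magnitude used in the experiment
WALL_MM = 2.0   # width of the feedback area at the target center

def fb_wall_force(finger_x_mm, target_center_mm, moving_toward_target):
    """FB condition: resist the approaching finger inside the 2 mm band,
    helping it stop at the target center before the tap."""
    in_band = abs(finger_x_mm - target_center_mm) <= WALL_MM / 2
    if in_band and moving_toward_target:
        return -FORCE_N   # oppose the approach direction
    return 0.0

def fa_click_pulse(t_since_tap_ms, pulse_ms=20.0):
    """FA condition: a brief longitudinal pulse after a successful tap,
    simulating a button click (pulse_ms is an assumed duration)."""
    return FORCE_N if 0.0 <= t_since_tap_ms < pulse_ms else 0.0
```

The key asymmetry is temporal, not physical: both effects use the same 0.9 N magnitude, but only the FB wall can influence the finger while the interaction is still unfolding.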
5.3 Results
5.3.1 Task completion time
There was a significant effect of feedback conditions on task completion time (F2,22 = 12.9, p < .0001). The mean task completion times in the visual-only condition, the FA condition and the FB condition were 618 ms, 613 ms, and 592 ms respectively. The post-hoc Tukey test was applied to compare differences between feedback conditions. There was no significant difference between the FA condition and the visual-only condition (p > .1). However, the FB condition reduced task completion time compared with both the visual-only condition and the FA condition (Figure 7). This effect was statistically significant (p < .0001).

Figure 7. Mean task completion time in the tapping task, by feedback condition.

5.3.2 Error rate
The error rate in the visual-only condition (mean = 13.2%) was not significantly different (p > .1) from that in the FA condition (mean = 12.0%). The FB condition (mean = 6.6%), however, eliminated 50% of the errors made in the visual-only condition and 45% of those made in the FA condition. The effect of feedback conditions on error rate was statistically significant (F2,22 = 19.3, p < .0001).

5.3.3 Throughput
The FB condition also yielded the highest throughput. The overall mean throughput for the FB condition was 4.64 bit/sec, which was 27.5% higher than the 3.64 bit/sec observed for the FA condition and 31.1% higher than the 3.54 bit/sec observed for the visual-only condition. Although there was a significant effect of feedback conditions on throughput (F2,22 = 19.1, p < .0001), the difference between the visual-only condition and the FA condition was not significant (p > .1).

5.4 Discussion
In general, participants in the FB condition outperformed those in the other feedback conditions. Haptic feedback in the FB condition worked in two ways. First, it assured participants that they had reached the target, so that they were able to press on the target without spending time aiming at its center. This effect was identical to that in Experiment 1. Second, it prevented participants from moving their fingers past the target. The force feedback in the FB condition worked as a wall off which fingers "bounced" as soon as they encountered the resistance. Participants then pressed on the target very close to the "wall", which was located at the center of the target. As a result, the error rate decreased significantly. By contrast, adding haptic feedback in the FA condition had no effect on users' performance compared with visual-only feedback. This result is understandable: because the haptic feedback acted after the interaction, it could not affect the user's conduct at any stage of the interaction process. As far as users' performance is concerned, it is more fruitful to add haptic feedback at Stage 2 than at Stage 3.

6. GENERAL DISCUSSION
6.1 When to Apply Haptic Feedback to Touch Screens
As noted in the introduction, we divided the process of interaction with GUIs into three stages. The two experiments we conducted investigated the right times for adding haptic feedback on touch screens. The first experiment compared adding haptic feedback at two points within Stage 2. The second experiment compared adding haptic feedback at Stage 2 and at Stage 3. We did not present any experiment related to Stage 1 because studies at that stage already exist. Here we bring together previous research with our own to discuss when to apply haptic feedback to touch screens at the three stages. The results are depicted in a schematic diagram in Figure 8.
Figure 8. Times to add haptic feedback to touch screens in the interaction process. The end of the arrow is connected to its start to represent successive interaction processes.

Stage 1: Locating. The earlier the better. Haptic feedback is useful at this stage since it helps the user locate the GUI. In this case, haptic feedback functions as a reminder that the user needs to react to a coming event, such as a new email, an instant message or the completion of a program. As soon as the haptic feedback alerts the user, he interacts with the particular GUI mapped to the event. In this way, haptics eliminates the time that the user has to spend locating the GUI. This effect is similar to that of a twinkling icon which reminds the user of an instant message. The earlier the reminder appears, the sooner the user will react to the event. We therefore suggest adding haptic feedback at the beginning of the interaction process (Figure 8).
The haptic feedback in Stage 1 is particularly useful on mobile devices, where screen space is too limited to display all information at once. In addition, in mobile environments users must concentrate on walking and wayfinding and so have little time to check their devices. Previous research [7-9] has reported that haptically enhanced progress bars on mobile devices improved users' response times and were generally preferred. This evidence shows that adding haptic feedback at Stage 1 can improve users' performance.
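The Stage-1 reminder logic described above can be sketched as a small event dispatcher: an incoming event immediately triggers the vibration pattern associated with the GUI element that handles it. The `HapticDevice` class, its `pulse()` method and the pattern values below are hypothetical stand-ins for illustration, not a real actuator API.

```python
# Vibration patterns (pulse durations in ms) mapped to event types.
EVENT_PATTERNS = {
    "new_email":   [40, 40],      # two short pulses
    "instant_msg": [80],          # one longer pulse
    "task_done":   [40, 40, 40],  # three short pulses
}

class HapticDevice:
    """Stand-in for an actuator driver; records the patterns it plays."""
    def __init__(self):
        self.played = []

    def pulse(self, durations_ms):
        self.played.append(list(durations_ms))

def notify(device, event):
    """Fire the haptic reminder as soon as the event arrives (Stage 1),
    so the user need not spend time locating the GUI."""
    pattern = EVENT_PATTERNS.get(event)
    if pattern is not None:
        device.pulse(pattern)
    return pattern

dev = HapticDevice()
notify(dev, "new_email")  # the reminder fires at the very start of the process
```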
Stage 2: Navigation. The later the better.

Adding haptic feedback can also improve users' performance at this stage. During the navigation towards the target, haptics helps the user move the input interface quickly and reach the target accurately. For instance, haptic tunnels connecting the target to the starting position of the input interface prevented users from deviating and helped participants reach the target by the quickest path. This effect decreased movement time [11, 12] and error rate [10] in steering tasks. In our experiments, adding haptic feedback at the edge and at the center of the target increased task completion speed by 7.9% and 9.8% respectively. In Experiment 2, the haptic wall at the center of the target not only reduced task completion time but also eliminated 50% of the errors made in the visual-only condition. The results of Experiment 1 showed that adding haptic feedback at Stage 2 can improve users' efficiency. Moreover, they demonstrated that adding haptic feedback at the center was slightly better than at the edge of the target, and that the effect on task completion time was more noticeable on the 20 mm wide target than on smaller ones. Note that the edge and the center represent, respectively, the beginning and the expected destination of the user's movement within the target. The results of Experiment 1 should therefore be interpreted as: adding haptic feedback at the expected destination was better than adding it at the start of the target. In the experiment, the user's expected destination coincided with the center of the stripe, since the target widths were smaller than or comparable to the width of the index finger. For larger targets, however, the expected destination may vary and the effect of haptic feedback may be weakened. In Experiment 1, we did not compare adding haptic feedback outside the target with adding it inside the target, for two reasons. First, adding haptic feedback outside the target, such as a haptic tunnel [10-12], requires the computer to know the particular target in advance, which is seldom the case; it is more practical to add haptics after the user contacts the target (i.e., the GUI). Second, in keeping with the idea of adding haptic feedback inside the target, we compared the effect of identical haptic feedback applied at different times in order to find the optimal time to apply it. So "the later the better in Stage 2" does not mean that adding feedback inside the target is better than adding it outside the target. It means that adding it at the user's expected destination is more practical than adding it ahead of the target, and also better than adding it at the start of the target. We therefore recommend adding haptic feedback at the user's expected destination on the target (Figure 8).

Stage 3: Interaction. Never!

Haptic feedback will not improve users' performance after an interaction. There was no effect after we added button-click feedback to normal buttons in the tapping experiment (Experiment 2). Similar results have been reported in number-entry tests [9, 19] and Fitts' tapping experiments [16, 20] on touch screens. By contrast, adding the haptic wall before the participant tapped on the target improved performance by reducing both task completion time and error rate. We can thus conclude that haptic feedback must be added before the interaction, or it will have no effect (Figure 8, Effective stages).

The above conclusion is deduced for a single interaction process. In real human-computer environments, however, interactions happen in succession, and the end of one interaction may become the start of the next. Haptic feedback delivered after one interaction may therefore serve as a reminder to start the next (Figure 8). In this case, the resulting improvement should be attributed to Stage 1 rather than to Stage 3. For example, in our tapping experiment, participants operated directly on the screen, so they could see exactly where they were tapping; visual feedback dominated the task of locating the target, and haptic feedback had no effect. By contrast, in Forlines et al.'s experiment [20], when visual feedback could not inform users of the target's location (e.g., the system pointer was lost, or the hand or pen occluded the target), haptic feedback was helpful in "providing a confirmation of the selection without the need for visual attention" [20]. Such confirmation prompted participants to move on to the next task (e.g., to interact with the alternative stripe) and thus improved performance.
6.2 Feedback Effect vs. Feedback Time

Besides timing, the nature of the haptic effect also plays a significant role in users' performance: adding feedback at the right time does not mean that any feedback will succeed. For example, the haptic well [15] (or tunnel [11]) that trapped the user to prevent slipping off GUIs led to frustration when users had to overcome its force to exit intentionally, and thus delayed task completion [15] even though it acted in Stage 2. In our experiments, the haptic bump in Experiment 1 only reminded participants that they needed to stop, whereas the haptic wall in Experiment 2 actually helped them stop at their destination. As a result, the haptic wall outperformed the haptic bump, not only increasing speed but also reducing the error rate. These examples show that the haptic effect should support rather than oppose the intent of the user [5]; adding the proper haptic feedback at the right time will then further enhance users' performance.
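The bump-versus-wall distinction can be sketched as two one-dimensional force profiles placed at the expected destination: the bump is a brief resistive pulse that merely signals "stop here", while the wall is a spring-like force that grows with penetration and physically arrests the finger. The function names, widths and stiffness values below are illustrative assumptions, not the parameters of the apparatus used in the experiments.

```python
def bump_force(x_mm, center_mm, bump_width_mm=1.0, peak_n=0.5):
    """Haptic bump: a brief resistive pulse as the finger crosses the
    target center -- a reminder to stop, but nothing holds the finger."""
    if abs(x_mm - center_mm) <= bump_width_mm / 2:
        return -peak_n  # oppose motion only inside the narrow bump
    return 0.0

def wall_force(x_mm, center_mm, stiffness_n_per_mm=2.0):
    """Haptic wall: a spring-like force growing with penetration past the
    center, so the finger is 'bounced off' and stops at the wall."""
    penetration = x_mm - center_mm
    if penetration > 0:
        return -stiffness_n_per_mm * penetration
    return 0.0

# Before the center neither profile resists; past it, only the wall keeps
# pushing back, which is why it also prevents overshoot errors.
forces = [(wall_force(x, 10.0), bump_force(x, 10.0)) for x in (8.0, 10.3, 12.0)]
```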
7. CONCLUSION AND FUTURE WORK

We presented two experiments to explore when to add haptic feedback during the user's interaction. The first experiment compared adding the same haptic feedback at the edge and at the center of the target. Feedback at the edge and at the center improved task completion speed by 7.9% and 9.8% respectively; feedback at the center slightly outperformed feedback at the edge, and the effect was more noticeable for 20 mm wide targets than for smaller ones. The second experiment compared adding haptic feedback before and after an interaction. Adding haptic feedback before the interaction increased task completion speed and eliminated 50% of errors; haptic feedback after the interaction had no such effect. Combining these results with previous research, we conclude that the optimal time to apply haptic feedback to the target GUI is when the user reaches his expected destination, and that haptic feedback is useful only if it is added before an interaction. The present research is based on the two most basic and frequently used interaction techniques, tapping and dragging. We will continue to investigate the optimal time to add haptic feedback in more complex and realistic tasks on touch screens.
8. REFERENCES
[1] Levin, M. and Woo, A. 2009. Tactile-feedback solutions for an enhanced user experience. Information Display, 25 (10), 18-21.
[2] Banter, B. 2010. Touch screens and touch surfaces are enriched by haptic force-feedback. Information Display, 26 (3), 26-30.
[3] Buxton, W., Hill, R. and Rowley, P. 1985. Issues and techniques in touch-sensitive tablet input. SIGGRAPH Comput. Graph., 19 (3), 215-224.
[4] Brewster, S., Chohan, F. and Brown, L. 2007. Tactile feedback for mobile interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, CA, USA, April 28 - May 3, 2007). CHI '07. ACM, New York, NY, 159-162.
[5] Oakley, I., Adams, A., Brewster, S. and Gray, P. 2002. Guidelines for the design of haptic widgets. In Proceedings of British HCI 2002. British Computer Society, 195-212.
[6] Yatani, K. and Truong, K.N. 2009. SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (Victoria, British Columbia, Canada, October 4-7, 2009). UIST '09. ACM, New York, NY, 111-120.
[7] Brewster, S.A. and King, A. 2005. The design and evaluation of a vibrotactile progress bar. In Proceedings of World Haptics 2005, 499-500.
[8] Hoggan, E., Anwar, S. and Brewster, S.A. 2007. Mobile multi-actuator tactile displays. In Proceedings of the 2nd International Conference on Haptic and Audio Interaction Design (Seoul, South Korea, 2007). Springer-Verlag, 22-33.
[9] Leung, R., MacLean, K., Bertelsen, M.B. and Saubhasik, M. 2007. Evaluation of haptically augmented touchscreen GUI elements under cognitive load. In Proceedings of the 9th International Conference on Multimodal Interfaces (Nagoya, Aichi, Japan, November 12-15, 2007). ICMI '07. ACM, New York, NY, 374-381.
[10] Sun, M., Ren, X. and Cao, X. 2011. Effects of multimodal error feedback on human performance in steering tasks. Information and Media Technologies, 6 (1), 193-201.
[11] Dennerlein, J.T., Martin, D.B. and Hasser, C. 2000. Force-feedback improves performance for steering and combined steering-targeting tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (The Hague, The Netherlands, April 1-6, 2000). CHI '00. ACM, New York, NY, 423-429.
[12] Campbell, C.S., Zhai, S., May, K.W. and Maglio, P.P. 1999. What you feel must be what you see: adding tactile feedback to the Trackpoint. In Proceedings of the 7th IFIP Conference on Human-Computer Interaction. INTERACT '99. IOS Press, 383-390.
[13] Pakkanen, T., Raisamo, R., Raisamo, J., Salminen, K. and Surakka, V. 2010. Comparison of three designs for haptic button edges on touchscreens. In Proceedings of Haptics Symposium 2010. IEEE, 219-225.
[14] Hoggan, E., Brewster, S.A. and Johnston, J. 2008. Investigating the effectiveness of tactile feedback for mobile touchscreens. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 5-10, 2008). CHI '08. ACM, New York, NY, 1573-1582.
[15] Oakley, I., McGee, M.R., Brewster, S. and Gray, P. 2000. Putting the feel in "look and feel". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (The Hague, The Netherlands, April 1-6, 2000). CHI '00. ACM, New York, NY, 415-422.
[16] Poupyrev, I., Okabe, M. and Maruyama, S. 2004. Haptic feedback for pen computing: directions and strategies. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24-29, 2004). ACM, New York, NY, 1309-1312.
[17] Poupyrev, I., Maruyama, S. and Rekimoto, J. 2002. Ambient touch: designing tactile interfaces for handheld devices. In Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (Paris, France, October 27-30, 2002). UIST '02. ACM, New York, NY, 51-60.
[18] Akamatsu, M., MacKenzie, I.S. and Hasbroucq, T. 1995. A comparison of tactile, auditory, and visual feedback in a pointing task using a mouse-type device. Ergonomics, 38 (4), 816-827.
[19] Koskinen, E., Kaaresoja, T. and Laitinen, P. 2008. Feel-good touch: finding the most pleasant tactile feedback for a mobile touch screen button. In Proceedings of the 10th International Conference on Multimodal Interfaces (Chania, Crete, Greece, October 20-22, 2008). ICMI '08. ACM, New York, NY, 297-304.
[20] Forlines, C. and Balakrishnan, R. 2008. Evaluating tactile feedback and direct vs. indirect stylus input in pointing and crossing selection tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 5-10, 2008). CHI '08. ACM, New York, NY, 1563-1572.
[21] Gosselin, C., Poulin, R. and Laurendeau, D. 2008. A planar parallel 3-DOF cable-driven haptic interface. In Proceedings of the 12th World Multi-Conference on Systemics, Cybernetics and Informatics (Orlando, Florida, USA, 2008). WM-SCI '08, 266-271.
[22] ISO. 2000. Ergonomic requirements for office work with visual display terminals (VDTs), Part 9: Requirements for non-keyboard input devices. ISO 9241-9.
[23] Sasangohar, F., MacKenzie, I.S. and Scott, S.D. 2009. Evaluation of mouse and touch input for a tabletop display using Fitts' reciprocal tapping task. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 839-843.
[24] TouchSense Tactile Feedback Overview. Immersion Corporation.
[25] Tashiro, K., Shiokawa, Y., Aono, T. and Maeno, T. 2009. A virtual button with tactile feedback using ultrasonic vibration. Virtual and Mixed Reality, LNCS 5622, 385-393.