Paper Session: Pointing
UIST’11, October 16–19, 2011, Santa Barbara, CA, USA
Force Gestures: Augmenting Touch Screen Gestures with Normal and Tangential Forces

Seongkook Heo and Geehyuk Lee
Department of Computer Science, KAIST
Daejeon, 305-701, South Korea
{leodic, geehyuk}@gmail.com

ABSTRACT
Force gestures are touch screen gestures augmented by the normal and tangential forces on the screen. To study the feasibility of force gestures on a mobile touch screen, we implemented a prototype touch screen device that can sense the normal and tangential forces of a touch gesture on the screen. We also designed two example applications, a web browser and an e-book reader, that use force gestures for their primary actions. We conducted a user study with the prototype and the applications to study the characteristics of force gestures and the effectiveness of their mapping to primary actions. The study also revealed interesting usability issues and yielded useful user feedback about force gestures and their mapping to GUI actions.

ACM Classification: H5.2 [Information interfaces and presentation]: User Interfaces - Input devices and strategies.

General terms: Design, Human Factors

Keywords: Touch input, force input, finger properties

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. UIST'11, October 16-19, 2011, Santa Barbara, CA, USA. Copyright © 2011 ACM 978-1-4503-0716-1/11/10... $10.00.

INTRODUCTION
A touch screen allows users to manipulate objects on the screen directly with their fingers. Many touch screen gestures, such as a swipe, a flick, or a pinch, are intuitive because they are based on real-world gestures. However, a touch screen usually detects only the positions of touch points and ignores other physical properties of a finger gesture. The same finger movement on an object in the real world may express different intentions depending on its normal and tangential forces. For instance, the same finger movement may be intended to turn a single page, to turn multiple pages, or to slide on a page, depending on its normal and tangential forces. A touch screen cannot differentiate these gestures. One solution to such gesture conflicts is to use modes, but this strategy usually complicates the user interface. Another is to use different numbers of fingers [1], but this is not how we manipulate objects in the real world. A more fundamental solution is to let a touch screen sense physical properties of a finger gesture beyond touch movements and discern the differences among similar gestures.

We explored new possibilities with a touch screen that can sense not only touch positions but also touch force. In particular, we focused on the tangential component of the force, as common gestures such as underlining, page-turning, and multi-page turning differ in the tangential component of the touch force. We implemented a prototype device that can sense the normal and tangential components of the forces on the screen, designed "force gestures" that differ in both touch movements and force patterns, and conducted an experiment to verify the feasibility of this approach. Finally, we compiled strategies to map force gestures to different actions, implemented two force gesture applications, and evaluated the force gesture mappings in a user study.

RELATED WORK
Numerous studies have attempted to augment input actions by sensing additional physical properties such as force. Harrison et al. [5] added pressure sensors to a mobile device to detect flicking and grasping gestures; the pressure sensors were also used for handedness detection. Cechanowicz et al. [2] used force sensors to extend the input of a mouse, studied usability issues through experiments, and suggested design recommendations based on the results. Ramos et al. [11] investigated the use of force with different types of visual feedback, finding that continuous visual feedback is necessary for a pressure widget. Ramos and Balakrishnan [10] presented a method of controlling a precision parameter with force, and also presented pressure marks, which augment pen gestures with force.

The force-sensing touch screen has a long history. In 1978, Herot and Weinzapfel [6] presented an interface that senses the force vector applied to a screen via strain gauges around the touch surface. Minsky [7] implemented a system to sense force vectors and presented scenarios in which single-finger gestures were detected by a screen. For multi-touch table settings, Davidson and Han [4] implemented an object arrangement method using the pressure of the touch point, estimated from the size of the touch area. Vision-based multi-touch tables can measure the shape of a touch area, and there have been attempts to obtain the orientation of a finger [14] and to differentiate two different types of touch [15].

The force-sensing mobile touch screen has a relatively short history. Clarkson et al. [3] investigated the possibility of using pressure as input for a mobile device. Miyaki and Rekimoto [8] added a force-sensitive resistor to a mobile device and proposed a zooming and scrolling method controlled by pressure. Research In Motion [12] released the BlackBerry Storm 2, which has four FSRs under the touch screen and uses pressure information to distinguish a touch from a click. Roudaut et al. [13] proposed a method that enriches touch input without additional sensors by distinguishing touch inputs according to the movement patterns of touch points. While the force-sensing touch screen has a long history and force sensing on mobile touch screens has been studied, no research has investigated the use of tangential force on a mobile touch screen, which is the main focus of this paper.
Figure 1: The prototype: (a) a cross-sectional view to show the sensing structure, (b) the top view of the prototype, and (c) the sensing frame.
IMPLEMENTATION

Prototype Device
We implemented a prototype touch screen that can sense normal and tangential forces. Before arriving at the current prototype, we went through much trial and error. At first, we took apart an iPod Touch and attempted to install sensors under the touch panel, but were never successful. Our next strategy was to use the device as is and add a sensing structure around it. We made a frame essentially in the same form as the one shown in Figure 1 and attached 11 force sensors under and beside the device to sense the normal and tangential forces applied to the screen. The main problem with this prototype was that it was not sensitive enough to detect the small shear forces of natural touch gestures. The iPod Touch would also intermittently get stuck on or between the sensors. These problems were solved by the structure shown in Figure 1.
The prototype consists of three parts: a frame with force sensors, an iPod Touch device, and a film plate that delivers tangential force on the screen to sensors on the frame. The film plate has a thin acrylic frame that prevents the film from bending. The sensor frame has 12 sensors: four on the bottom to measure normal force, and two on each of the four side walls to measure tangential forces. The sensor frame also incorporates a circuit board with a PIC18F4620 microcontroller with a 10-bit A-D converter. When a finger touches the screen, normal force is delivered to the four pressure sensors on the bottom of the frame through the iPod Touch, while tangential force is delivered to the side sensors through the film plate. We had to pay special attention to the friction between the film plate and the sensors on the frame and to the friction between the film plate and the device. To reduce this friction, we attached Teflon tape (X-Ray Zerofriction) to the four edges of the iPod screen and to the force sensors.

The final prototype is in fact the result of much trial and error. The sensing structures of Herot and Weinzapfel [6] and Minsky [7] were not suitable models for a small mobile touch screen. The structure in Figure 1 was finally sensitive enough to sense the small shear force produced by a finger slide on a slippery touch screen (approximately 0.3 N). We refrained from a hasty user study with an unsatisfactory prototype because we knew that a poor prototype is often the end of a new idea. Had we forced a user test with an immature prototype that required fatiguing amounts of force, negative user feedback might have discouraged us from continuing the study.

Signal Processing

A microcontroller samples sensor values via a 10-bit A-D converter at a rate of 30 Hz and sends the sampled values to a PC, which forwards them to the iPod Touch over a Wi-Fi connection. All subsequent signal processing, including the initial calibration, is done on the iPod Touch. Owing to gravity, the sensor values change as the attitude of the prototype device changes. To remove the effect of gravity, the device stores the sensor values at the onset of a touch gesture and uses them as reference points for all subsequent readings. The system calculates the force on each of the four side walls by adding the values of the two sensors on that wall. A 2D tangential force vector is obtained by comparing the forces on the left/right walls and on the top/bottom walls. Normal force is calculated by adding the values of the four sensors at the bottom of the sensing frame. The touch input thus has a touch position and a 2.5-dimensional force vector (2.5-dimensional because the normal force is non-negative).
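As a concrete illustration, the force computation described above can be sketched as follows. This is a minimal sketch, not the authors' actual code: the sensor grouping, the dictionary keys, and the function names are assumptions, and raw values stand in for 10-bit A-D counts.

```python
# Illustrative sketch of the described signal chain; sensor layout,
# key names, and scaling are assumptions, not the authors' code.

def wall_force(samples):
    """Force on one side wall = sum of its two sensor values."""
    return sum(samples)

def compute_force_vector(raw, baseline):
    """Return (fx, fy, fn): a 2.5D force vector from 12 sensor readings.

    `raw` and `baseline` map sensor groups to lists of A-D counts.
    `baseline` is captured at touch onset, so subtracting it cancels
    the attitude-dependent effect of gravity.
    """
    # Subtract the onset baseline from every reading.
    d = {k: [r - b for r, b in zip(raw[k], baseline[k])] for k in raw}
    # Tangential components: compare the forces on opposing walls.
    fx = wall_force(d["right"]) - wall_force(d["left"])
    fy = wall_force(d["top"]) - wall_force(d["bottom_wall"])
    # Normal force: sum of the four bottom sensors, clamped non-negative
    # (hence a "2.5-dimensional" force vector).
    fn = max(0, sum(d["bottom"]))
    return fx, fy, fn
```

A touch pushing toward the upper-right, for example, would raise the right- and top-wall readings above their onset baselines and yield positive fx and fy.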
FORCE GESTURES

Designing Force Gestures

A touch gesture is defined by a series of touch points. We designed force gestures by augmenting touch gestures with the force properties of the tangential and normal forces. The set of force gestures we designed is summarized in Table 1. The two basic touch gestures, a tap and a slide, are augmented into five force gestures: a tap, a press, a pivot, a slide, and a drag. A press gesture is similar to a tap but is performed with a stronger normal force. A drag gesture is similar to a slide but is performed with a stronger tangential force. A pivot gesture is similar to a drag but is performed without slippage. Note in Table 1 that the normal force condition for a drag gesture is yes/no, meaning that a drag may or may not be accompanied by a strong normal force. A strong tangential force, the main feature of a drag gesture, may be due to a strong normal force or to a high friction coefficient, since the tangential force is given by F = μN, where μ is the coefficient of friction between the finger and the screen and N is the normal force on the screen. In other words, a drag gesture may be performed with a large normal force and/or with high friction, e.g., by using the finger pad rather than the fingertip.

Mapping Force Gestures

We formulated two strategies for designing the interaction mappings of force gestures, based on the hierarchical structure of the GUI and on the physical mass of the corresponding physical metaphor of an action.

Current GUI elements are structured hierarchically. For example, in the Cocoa framework, a window may contain a view of three panels with buttons and text fields. We can assume that a GUI component at a higher level of the hierarchy has more components and information than those at lower levels. Therefore, we can connect gestures requiring less force with actions at the current level and gestures requiring more force with actions higher in the hierarchy.

Many GUI actions are designed around physical metaphors. For example, the slide gesture in an e-book reader application comes from the motion of turning the pages of a book, while navigation through chapters has a heavier metaphor. When a series of actions has metaphors of different masses, we can map force gestures to them according to their force properties.

Force Gesture Applications

We implemented two applications, a web browser and an e-book reader, to explore the properties of force gestures. The gesture mappings for these applications are shown in Table 1.

Web Browser Application

The web browser application supports multiple tabs (Figure 2(a)). We mapped force gestures to web browser actions using both the hierarchical and the physical-mass strategy. In the hierarchical GUI structure, the application frame containing multiple tabs is at the highest level, tabs are at the next level, and web pages are under the tabs. Thus, the light force gestures, sliding and tapping, are mapped to scrolling a page and selecting a link, respectively, since these are actions within a web page. The pivot gesture is mapped to navigating backward and forward, which are higher-level actions than actions within a web page. The drag gesture is a heavy gesture, so we mapped it to left/right navigation between tabs. To give immediate feedback about the gesture, a simple animation of the tab movement is provided. Dragging upward or downward was mapped with the physical-mass strategy: a drag gesture is heavier than a slide gesture, so we mapped dragging upward or downward to scrolling to the bottom or the top of the page. Adding a page to the favorites can be done with a press gesture, a metaphor of pinning something up.
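The web browser mappings described in the text amount to a small lookup table from a gesture (and its direction) to an action. The dictionary below is an illustrative restatement of those mappings, not part of the authors' implementation; the key structure and action strings are assumptions.

```python
# Web browser gesture mappings restated as a lookup table (illustrative).
# Light gestures map to in-page actions (hierarchy strategy); heavier
# gestures map to higher-level or "heavier" actions (physical-mass strategy).
BROWSER_MAPPING = {
    ("tap", None): "select link",                       # in-page, light
    ("slide", "up/down"): "scroll page",                # in-page, light
    ("pivot", "left/right"): "navigate backward/forward",  # page history
    ("drag", "left/right"): "switch tab",               # higher level: tabs
    ("drag", "up/down"): "scroll to top/bottom",        # heavy drag, big jump
    ("press", None): "add to favorites",                # "pinning" metaphor
}

def browser_action(gesture, direction=None):
    """Look up the action for a recognized force gesture."""
    return BROWSER_MAPPING.get((gesture, direction))
```

Such a table makes the two mapping strategies explicit: moving down the dictionary, the gestures require more force and the actions sit higher in the GUI hierarchy or carry a heavier metaphor.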
E-book Reader Application
The e-book reader application was designed following the Apple iBooks application (Figure 2(b)).
Table 1: Force gestures: their icons, defining properties, and mappings to application actions.
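Since the table body itself does not survive in this text-only version, the defining properties of the five gestures, as described in the Designing Force Gestures section, can be sketched as data. The field names and boolean encoding below are assumptions for illustration; `None` stands for the paper's "yes/no" (either value allowed).

```python
# Defining properties of the five force gestures, reconstructed from the
# text (illustrative; field names are assumptions, not from Table 1 itself).
#   moves: the touch point travels across the screen (slippage)
#   normal: a strong normal force is required (None = "yes/no", either way)
#   tangential: a strong tangential force is required
FORCE_GESTURES = {
    "tap":   dict(moves=False, normal=False, tangential=False),
    "press": dict(moves=False, normal=True,  tangential=False),
    # A pivot applies a strong tangential force without slippage.
    "pivot": dict(moves=False, normal=None,  tangential=True),
    "slide": dict(moves=True,  normal=False, tangential=False),
    # normal=None encodes the "yes/no" condition: the strong tangential
    # force may come from a large normal force or from high friction
    # (F = mu * N), e.g., using the finger pad rather than the fingertip.
    "drag":  dict(moves=True,  normal=None,  tangential=True),
}
```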
Figure 2: Screenshots of (a) the web browser and (b) the e-book application.

A slide gesture was mapped to the action of turning a page, and a tap gesture was mapped to turning an overlay menu on or off. With this menu, users can add a bookmark, open the table of contents, or navigate the e-book with a scroll bar. Force gestures help users navigate pages more efficiently. A drag gesture can be regarded as a slide gesture with stronger friction, i.e., like turning multiple pages at once. Thus, we mapped the drag gesture to flipping multiple pages at a time. Visual feedback about the number of pages to be turned is provided at the edge of the page, so users can select the number of pages to turn, from one to five, by controlling the force on the edge before dragging.

Thumbing through a book can be done in many e-book applications with a slider control. In the real world, however, thumbing through a book is a rate control action rather than a position control action. Therefore, we mapped the pivot gesture to the thumbing action. Users can thumb through the book at varying speed by controlling the tangential force of the pivot gesture. The magnitude of the tangential force is displayed on the screen as the length of a pentagon-shaped indicator. Dragging downward and upward on a page was mapped to showing and hiding the table of contents. The press gesture was mapped to adding a bookmark, just as it was mapped to adding to favorites in the browser application.

EVALUATION

Two experiments were conducted to test the feasibility of the prototype device and to study the usability properties of the force gestures.

Participants

Ten participants were recruited for the experiments. Three were female, and all were right-handed. Their mean age was 25.3 years. One participant had little experience using mobile devices equipped with a touch screen, while all others were experienced users.

Experiment Settings

All experiments were done in a seated condition. Because every participant was right-handed, they were asked to hold the prototype device in their right hand and operate the touch screen with the thumb of that hand.

Feasibility Test

The first experiment was designed to test the basic feasibility of the prototype and the proposed force gestures, and to discover the difficulties and error characteristics of the force gestures. Force gestures were recognized by a simple rule-based algorithm based on the attributes defined in Table 1; the classification rules were generated heuristically.

Tasks and Procedure

At the beginning of the experiment, simple instructions and a demonstration of the force gestures were given to the participants. Participants were then asked to practice the force gestures until they felt comfortable with them, which took no more than 5 minutes. When they felt ready, they started the experiment. The task was to perform force gestures by following prompts randomly displayed on the screen; the recognized result was also displayed on the screen. The experiment consisted of 900 trials (10 participants × 3 blocks × 30 trials).

Figure 3: The mean error rates of the force gestures in the three blocks.

Results

Figure 3 shows the mean error rates for the three blocks of the feasibility study. Although the experiment was conducted immediately after a short practice period, participants input force gestures correctly in approximately 93.6% of trials even in the first block, and the mean error rate over all blocks was approximately 5.4%. The drag gesture was the most error-prone in all blocks, responsible for 36.7% of all errors. Among the drag errors, 66.7% were recognized as pivot gestures; these errors occurred when participants failed to slide their finger far enough to trigger a drag gesture. The second most error-prone gesture was the press gesture, with 66.7% of press errors recognized as tap gestures. The third was the pivot gesture, which was often misrecognized as a press gesture. We expect that these three most common types of error may be reduced with proper visual feedback, as they often occurred when a movement distance, a normal force, or a tangential force failed to reach a threshold value.
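A rule-based recognizer of the kind described for the feasibility test can be sketched as a cascade of threshold checks on movement distance, normal force, and tangential force. This is a hypothetical reconstruction consistent with the gesture definitions, not the authors' algorithm; the threshold values and names are assumptions (the 0.3 N tangential threshold echoes the shear force magnitude mentioned for the prototype).

```python
# Hypothetical rule-based force gesture classifier (illustrative only).
# Thresholds are assumed values, not the paper's heuristic rules.
MOVE_T = 20.0        # movement distance threshold, px (assumed)
NORMAL_T = 2.0       # strong normal force threshold, N (assumed)
TANGENTIAL_T = 0.3   # strong tangential force threshold, N (assumed)

def classify(distance, normal, tangential):
    """Classify one completed touch into the five force gestures."""
    moved = distance > MOVE_T
    if moved:
        # A slide with a strong tangential force becomes a drag.
        return "drag" if tangential > TANGENTIAL_T else "slide"
    if tangential > TANGENTIAL_T:
        return "pivot"   # strong tangential force without slippage
    if normal > NORMAL_T:
        return "press"   # a tap with a stronger normal force
    return "tap"
```

Such a cascade also explains the observed confusions: a drag whose movement falls short of the distance threshold lands in the pivot branch, and a press whose normal force falls short of its threshold lands in the tap branch.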
Evaluating Force Gestures
To discover usability issues related to the force gestures, we conducted an experiment with the two applications that use them.

Tasks and Procedure
The two applications, a web browser with tab browsing functionality and an e-book reader, were used for this task. After the feasibility test, participants were asked to use both applications. Instructions on using the applications with force gestures were given at the beginning of the experiment. While explaining the force gestures, we described them figuratively. For the press gesture, mapped to the "add to favorites" and "add bookmark" actions, we instructed, "press as though you are pinning something up," while for the drag gesture, we instructed, "drag as though you're dragging paper pages."
Figure 4: The questionnaire results for the eight force gesture mappings (on a 5-point Likert scale).

For the web browser application, participants were asked to visit the news page at Google.com, a long page that requires scrolling to view. They were also asked to surf the web for about five minutes while using all of the available force gestures, as described in the previous section. With the e-book application, we demonstrated how to use the force gestures and let the participants navigate a book freely; as in the web browser application, they were asked to use all of the force gestures.

After the experiment, the participants completed a questionnaire on a five-point Likert scale (1: strongly disagree, 5: strongly agree), answering six questions about usability. These assessed the intuitiveness of the gestures, ease of learning, level of fatigue, input difficulty, enjoyment, and preference for the force gesture over a conventional touch gesture. We also conducted interviews to uncover possible usability issues, such as how participants felt about the direction mappings of the force gestures and which gesture they found most interesting.

Results

The questionnaire results for the eight force gesture mappings used in the two applications (four gestures for each application) are shown in Figure 4. The first four gesture mappings were used in the web browser application and the remaining four in the e-book reader application. The participants agreed that the gesture mappings were intuitive and easy to learn.

The press gesture was used in both applications, for "add to favorites" and "add bookmark," which can be assumed to be similar in terms of the action required. However, the questionnaire results on fatigue showed a difference between the two actions: "add bookmark" was reported to cause the least fatigue, whereas "add to favorites" was rated relatively high. We could infer the reason from comments such as, "I was worried when I added a page to my favorites because I would end up selecting a link if I failed to perform the press gesture. Going to an unwanted page is really annoying." The possibility of triggering an unintended action made participants apply more force to the screen.

We noted that the two gesture mappings of backward/forward navigation and thumbing through pages were relatively difficult to learn, as both were triggered by the pivot gesture. The responses to the questions about fatigue and input difficulty showed that these two mappings were difficult to input and led to user fatigue. Interestingly, although the thumbing and backward/forward actions were activated by the same force gesture, thumbing through pages was the most preferred gesture mapping of all, whereas backward/forward was the least preferred. We deduced the reason from the questionnaire results and from comments during the interviews: thumbing through pages was rated highest in terms of enjoyment, and it also received the most positive feedback in the interviews.

Turning multiple pages lets users flip several pages at once while providing a way to choose the number of pages to turn. We observed that participants tended to choose the exact number of pages carefully and did not want to turn an arbitrary number of pages. We received many comments about this gesture. One participant stated, "I was stressed about the visual feedback showing the number of pages. The visual feedback urged me to select the exact number of pages." Another reported, "I would not use this function. If I need to turn many pages, I would simply thumb through them all."
During the interviews, the participants were asked about the direction mappings of the gestures. Most reported no problem with the directions, but some experienced confusion. Two participants were confused when performing the drag up/down gestures to scroll to the bottom/top of a page; one of them said he felt as though he were holding a scroll bar rather than the web page itself. Two participants were confused about the tab change direction, and one reported that the direction mapping for the backward/forward actions was initially confusing.
DISCUSSION

Participants in the user study reported that two actions, backward/forward navigation and thumbing through pages, were the most difficult and caused the most fatigue. Note that both were associated with the pivot gesture. However, user preferences for the two actions were markedly different: backward/forward navigation was the least preferred, while thumbing through pages was the most preferred. The backward/forward action may have been least preferred because it caused considerable fatigue. The thumbing action, though basically the same as backward/forward navigation in terms of gesture, may have been favored because it was rewarding, i.e., it enabled control of the flipping rate, which was not possible before, and because its conceptual mapping was adequate.

For the drag gesture, participants felt more fatigue when it was used for the multiple page-turn action than for the previous/next-tab action. This may be because the drag gesture for turning multiple pages involves multiple force levels: in order to hold a certain number of pages, participants had to press the screen with strong force, resulting in stronger friction between the finger and the screen. In particular, they had difficulty adjusting the number of pages they wanted to hold.

We showed all participants how to use the force gestures with a short demonstration. Unlike finger movements, finger forces were invisible to the participants, and this may have confused them. In the first implementation, the e-book application gave no feedback on the amount of tangential force applied to the device, and participants in the pilot study commented that the thumbing action was hard to perform. After we added a tangential force indicator at the top of the screen, participants reported that it was much easier to use. We expect that continuous visual feedback about applied forces will help users learn force gestures more easily.

CONCLUSION

We implemented a prototype touch screen device that detects force vectors and studied force gestures by designing two applications on the prototype. Through a user study, we verified the feasibility of force gestures and discovered valuable usability issues in their use. We are currently investigating possible mappings between force gestures and GUI actions, with the goal of building a force gesture framework.

ACKNOWLEDGMENTS

This work was supported in part by the IT R&D program of MKE/KEIT [10039161, Core UI technologies for improving Smart TV UX].

REFERENCES

1. Apple, Magic Trackpad. http://www.apple.com/magictrackpad/
2. Cechanowicz, J., Irani, P. and Subramanian, S. Augmenting the mouse with pressure sensitive input. In Proc. CHI '07, pp. 1385-1394.
3. Clarkson, E., Patel, S., Pierce, J. and Abowd, G. Exploring Continuous Pressure Input for Mobile Phones. GVU Technical Report GIT-GVU-06-20, 2006.
4. Davidson, P. and Han, J. Extending 2D object arrangement with pressure-sensitive layering cues. In Proc. UIST '08, pp. 87-90.
5. Harrison, B., et al. Squeeze Me, Hold Me, Tilt Me! An Exploration of Manipulative User Interfaces. In Proc. CHI '98, pp. 17-24.
6. Herot, C. F. and Weinzapfel, G. One-point touch input of vector information for computer displays. In Proc. SIGGRAPH '78, pp. 210-216.
7. Minsky, M. R. Manipulating simulated objects with real-world gestures using a force and position sensitive screen. In Proc. SIGGRAPH '84, pp. 195-203.
8. Miyaki, T. and Rekimoto, J. GraspZoom: zooming and scrolling control model for single-handed mobile interaction. In Proc. MobileHCI '09, pp. 1-4.
9. Ramos, G. A. and Balakrishnan, R. Pressure marks. In Proc. CHI '07, pp. 1375-1384.
10. Ramos, G. and Balakrishnan, R. Zliding: fluid zooming and sliding for high precision parameter manipulation. In Proc. UIST '05, pp. 143-152.
11. Ramos, G. A., et al. Pressure widgets. In Proc. CHI '04, pp. 487-494.
12. Research In Motion, SurePress Technology. http://www.rim.com/products/surepress/index.shtml
13. Roudaut, A., et al. MicroRolls: expanding touch-screen input vocabulary by distinguishing rolls vs. slides of the thumb. In Proc. CHI '09, pp. 927-936.
14. Wang, F., et al. Detecting and leveraging finger orientation for interaction with direct-touch surfaces. In Proc. UIST '09, pp. 23-32.
15. Wang, F. and Ren, X. Empirical evaluation for finger input properties in multi-touch interaction. In Proc. CHI '09, pp. 1063-1072.