IADIS International Conference Interfaces and Human Computer Interaction 2008

A STUDY ON INTUITIVE GESTURES TO CONTROL MULTIMEDIA APPLICATIONS

Edit Varga, Jouke Verlinden, Otmar Klaas, Luuk Langenhoff, Diederik van der Steen and Jasper Verhagen
Faculty of Industrial Design Engineering, Delft University of Technology
Landbergstraat 15, 2628 CE Delft, The Netherlands

ABSTRACT

Hand gesture recognition techniques have been studied for more than two decades. Several solutions have been developed; however, little attention has been paid to the human factors, e.g. the intuitiveness of the applied hand gestures. This study was inspired by the movie Minority Report, in which a gesture-based interface was presented to a large audience. In the movie, a video-browsing application was controlled by hand gestures. Nowadays the tracking of hand movements and the computer recognition of gestures is realizable; however, a usable system requires an intuitive set of gestures. The system functions used in Minority Report were reverse engineered and a user study was conducted, in which participants were asked to express these functions by means of hand gestures. We were interested in how people formulate gestures and whether we could find any pattern in these gestures. In particular, we focused on the types of gestures in order to study intuitiveness, and on the kinetic features to discover how they influence computer recognition. We found that there are typical gestures for each function, and these are not necessarily related to the technology people are used to. This result suggests that an intuitive set of gestures can be designed, which is not only usable in this specific application but can be generalized for other purposes as well. Furthermore, directions are given for the computer recognition of gestures regarding the number of hands used and the dimensions of the space in which the gestures are formulated.

KEYWORDS

Hand gesture, gesture recognition, gesture control, intuitive user interfaces

1. INTRODUCTION

Multimodal user interfaces have been a focus of research for many years (Arangarasan and Phillips, 2002). Hand gesture and motion processing (Kettebekov and Sharma, 2001), head and eye movement (Tanriverdi and Jacob, 2000), speech (Wolf and Bugmann, 2006), facial expressions (Truong, 2007) and, recently, brain control (Gnanayutham and George, 2007) have been studied in order to replace keyboard- and mouse-based interaction wherever this is necessary or makes communication easier or richer. Attention has centered on the hardware and software techniques which enable the collection of information from these modalities, and on their processing to arrive at a system which interprets user intention. Hand gestures are considered an intuitive input means and have been studied in several applications, such as sign language recognition, conceptual design, robot control or the control of multimedia applications. Inspired by the movie Minority Report, we focus on this latter application in this paper. Despite the acknowledged need for hand gestures as an input means, little is known about usable hand gestures in specific applications (Lin and Imamiya, 2006). Obviously, the intuitiveness of a set of hand gestures is influenced by the purpose of usage. In most cases, gestures have been selected based on their ease of computer recognition, disregarding human aspects such as understandability or learnability. When users are forced to learn a new way of interaction, the learning process should be as effortless as possible. We had two main research questions. Firstly, we were interested in which gestures people find intuitive when controlling multimedia applications, and whether there are typical gestures for this purpose. Secondly, we wanted to know how these gestures influence the recognition technique. Furthermore, we were also interested whether some guidelines can be given to aid the formalization of a gestural language for a specific task. This paper describes the design, the results and the evaluation of an experiment in which participants were asked to formulate gestures which they find intuitive for selected commands of a multimedia application. The results of the experiment are analyzed according to the research questions, focusing on the types and on the kinetic features of the gestures.

ISBN: 978-972-8924-59-1 © 2008 IADIS

2. RELATED WORK

By studying the related work, we wanted to know the types of applications that benefit from the usage of a hand gesture or motion interface. An important application is sign language recognition, easing the learning of sign language (Brashear et al., 2006), the communication between non-hearing and hearing people (Stein et al., 2007), or their interaction with computers (Vogler and Metaxas, 2004). Hand gestures and motions, as they are consciously and unconsciously used in everyday communication, are considered intuitive and natural, and therefore many researchers regard them as a means to replace the sometimes cumbersome keyboard- and mouse-based communication. In fact, some researchers focus on replacing mouse functionalities with hand gestures (Ianizotto et al., 2001). As a step further, others concentrate on translating hand gestures to control robots (Malima et al., 2006). Similarly, hand gestures have also been found useful in controlling presentations (Ahn et al., 2004) or televisions (Freeman and Weissman, 1994). Besides other modalities, hand gestures and motions have been studied in creative shape design, either in the form of curves (Xu et al., 2000) or surfaces (Matsumiya et al., 2000). Verlinden's Wizard of Oz experiment pointed out that, when applied in combination with speech, gestures are useful to indicate location, orientation and size (Verlinden et al., 2001). However, this experiment mainly used primitive shapes; Varga later proved the usefulness of gestures when expressing freeform shapes (Varga, 2007). A Hand Motion Language developed by Horvath was used in that research (Horváth et al., 2003). In medical applications, computers are sometimes needed but cannot be operated by doctors in the usual way. This is especially true in surgical situations, where sterility is crucial and surgeons must not touch the keyboard or the mouse. To solve this problem, Grange proposed hand gesture-based communication (Grange and Baur, 2006). Moreover, hand gestures are also applied in other areas of medicine, such as diagnosis (New et al., 2004) or treatment (Keskin et al., 2007). Another typical application of hand gesture recognition is animation or the usage of avatars (Neff et al., 2008). In these situations, the gestures of a real human are mapped to those of a virtual character. An enormous amount of literature can be found on the technical aspects of computer-based gesture recognition. Several survey papers give a good introduction to this topic for interested readers. Some approach the problem from a purely technological point of view (Watson, 1993) (LaViola, 1999). Others concentrate only on camera-based recognition (Pavlovic et al., 1997) or on contact methods (Hand, 1997). Varga (2004) gives a review based on the processing techniques of hand motions and gestures.

3. GESTURE CHARACTERISTICS

3.1 Categorization of Gesture Types

Concerning the types of gestures, we applied the categorization by Lausberg (Lausberg et al., 2007), because after a first evaluation of the recorded gestures it was already apparent that it covers all of the types we found and differentiates the dynamic gestures in a proper manner. Only the gesture types we came across during the evaluation of the experiment are briefly explained here. Emblems are substitutes for words, have a specific meaning and are used consciously. Iconic gestures are used to show concrete physical items by depicting, tracing or shaping a form. Ideographs or metaphoric gestures are used to explain or depict an abstract concept like a form, movement, trace, position or action. Pantomime gestures demonstrate an action, often referring to the use of an imagined object. In the case of deictic gestures, finger(s) or hand(s) point to a visible or an invisible locus in the external space. Positioning gestures mark a place or a position in an imaginary space, often in relation to another position. Finally, batons are up-down or circular movements with a downward accent, and tosses are short outward/inward or supination/pronation movements of the hand with an outward accent. Batons and tosses are often repetitive. In this paper, the human factors of recognition are studied by analyzing the types of gestures people use. Keeping in mind that we focus on a practical application, we were also interested in the technical aspects of gesture recognition. By looking at the kinetic features, we examined how well the gestures fit the capabilities of current recognition technologies. We analyzed those features that influence recognition, e.g. the space in which the gestures occurred, the shape of the hand, or whether people use one or two hands to formulate a gesture.

3.2 Kinetic Features of Gestures

By studying the kinetic features, we were interested in how they influence the computer recognition of gestures. The study is based on Laban Movement Analysis (LMA) (Bouchard and Badler, 2007), simplified to only those features that are essential in terms of computer recognition. The first category of LMA, called body, describes which body parts are moving and which parts are influenced by others. The fact that the left, the right or both hands are used in a gesture is important for recognition. The second category, called effort, explains more subtle characteristics of movement with respect to inner intention. We were interested in whether the expressiveness of a gesture is held in the shape or in the motion of the hand. The third category, shape, describes the configuration of the hand, and the fourth, space, describes where the gestures take place.
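These four simplified categories can be captured in a small data record that a recognizer could attach to each observed gesture. The sketch below is our own illustration of this idea; all type and field names are hypothetical and not part of LMA or this study:

```python
from dataclasses import dataclass
from enum import Enum

class Hands(Enum):       # "body" category: which hand(s) perform the gesture
    LEFT = "left"
    RIGHT = "right"
    BOTH = "both"

class Carrier(Enum):     # "effort" category: where the expressiveness lies
    SHAPE = "shape"      # meaning carried by a static hand shape
    MOTION = "motion"    # meaning carried by the trajectory of the hand

@dataclass
class KineticFeatures:
    hands: Hands         # body: left, right or both hands
    carrier: Carrier     # effort: shape- or motion-driven
    hand_shape: str      # shape: finger configuration, e.g. "flat"
    partitions: set      # space: gesture-space cells the motion covered

# Example record for a one-handed, motion-driven "pause" gesture:
features = KineticFeatures(Hands.RIGHT, Carrier.MOTION, "flat", {"center-center"})
```

Such a record separates the human-factors analysis (which category values occur) from the recognition question (which values a given tracking technology can observe).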

4. EXPERIMENT ON INTUITIVE GESTURES

The experiment was conducted with 14 participants, who were asked to express media control actions using hand gestures. The actions were borrowed from the movie Minority Report (Spielberg, 2002) and were the following: pause, expand view, fast forward, put together, zoom in, no input, empty to side, enlarge frames, select - open - select multi - grab - relocate, combining selection, pause then play, select and move, rotate, zoom selection, opening tool and frame-by-frame, shut down, start video, turn on, rotate camera view, playing frames forward/backwards, snapshot, dragging, select tool, close document, browsing text and input symbols. All of our participants were university students from the field of industrial design, because they were the most accessible. Participants were video-recorded while showing the gestures. The actions were described in a verbal form, and one session took approximately 30 minutes. This paper reports on the analysis of a subset of the actions above. This subset was chosen with a basic DVD player application in mind, and contains the following items: turn on, turn off, selecting a menu item/scene, starting video, stop/pause, play/resume, frame-by-frame playing backwards and forwards, and fast forward. With this experiment we were interested in (1) what types of gestures people apply in this specific context, (2) whether people show the same or at least similar gestures for the same action, (3) whether there is a pattern in the type of gestures a person uses, (4) whether people apply the reverse gesture for reverse actions, (5) whether people can abstract from the context by not using computer- or other media technology-related gestures, and (6) whether there is a pattern in terms of the kinetic features of the gestures.

5. RESULTS AND DISCUSSION

5.1 Gesture Types

For the gestures power on and power off we found that pantomime, iconic and metaphoric gestures, and in one case an emblem, were used. A popular gesture is the imitation of opening/closing a book, which is a pantomime-type gesture (Figure 1). Participants also used clapping or waving gestures to show power on. These are considered metaphoric gestures, because they generally indicate the concept of drawing attention. Some iconic gestures were applied to express power off, like drawing an X or showing the letter T. While the first is commonly used in computer software to close a window, the second is mainly used during sports games to indicate that a person wishes to stop the game. In this case, half of the participants applied the reverse gesture for the reverse action.

Figure 1. Typical gestures for the (left) “power on” and (right) “power off” actions

Using reverse gestures for opposite actions is supposed to be intuitive and may be easier to learn. Pantomime gestures seemed the most appropriate for this purpose; the opening/closing book gesture is a good example. However, there were participants who applied one of these gestures but used something else for its opposite. In the case of metaphoric gestures, it is hard to define or explain a reverse gesture. For example, clapping or waving seems intuitive for drawing attention, but their reverse would only express the concept of saying goodbye; an unambiguous connection between these gestures cannot be defined. Considering iconic gestures, reverse gestures may or may not exist, and none of the participants tried to formulate them. When a non-reverse gesture was used, there were different combinations of gesture types, e.g. metaphoric-iconic or metaphoric-pantomime.

Figure 2. Typical gestures for the (left) “play” and (right) “pause” actions

For the select action, deictic gestures, metaphoric gestures in combination with deictic ones, and positioning gestures were used. Deictic gestures pointed in a direction, either with one or two index fingers or with one or two hands. Positioning gestures drew a circular shape around the area of interest. Metaphoric gestures imitated grabbing something in the air, and in this context it is assumed that they simultaneously point in the direction of the item to be selected. For the play action, metaphoric, pantomime and iconic gestures were applied. The metaphoric gestures expressed two different things: some were waves to the right indicating "go on", others resembled a double mouse click. In this latter case there were two quick movements, either with the whole hand or with an index finger, but these did not imitate the actual usage of a mouse on a table; they also resembled the usage of a touch screen. Tosses were also popular. In all of these cases, participants made a movement towards the imaginary screen (Figure 2 left). In some cases iconic gestures were used, drawing a triangle in the air resembling the play symbol on various media equipment. Interesting pantomime gestures were those in which participants imitated the usage of an old-style video camera by making a rolling movement close to their heads. For the pause action, metaphoric and iconic gestures were used. The metaphoric ones were the most popular, showing a flat hand towards the imaginary screen, as is often used alongside speech when people say "stop" or "enough". This was similar to the tosses in the case of the play action, but there was no movement towards the screen. Iconic gestures were also applied, resembling the two parallel lines of the pause symbol used in multimedia applications. The lines were either drawn by two hands in the air or shown by the index and middle fingers of one hand (Figure 2 right). Considering that the opposite of the play action is stop, reverse gestures were found for half of the participants. The reverse of drawing a triangle for the play action is clearly the indication of the two parallel lines of the pause symbol. The reverse of the metaphoric gesture "go on" might be the gesture indicating "stop, enough"; this combination was used by several participants. For the fast forward and backward actions, only metaphoric gestures were used. These were either waves to the right and left or circular motions clockwise and counterclockwise, respectively. These gestures simply expressed the concept of motion. Unsurprisingly, the reverse gesture was always used in this case, as it indicated the direction of the motion in the movie. Finally, the frame-by-frame action was expressed only by metaphoric gestures, as in the previous case, indicating motion in the movie. Mainly a horizontal movement of one hand was applied, resembling the usage of a scrollbar; in other cases, a circular motion was used. Again, the direction of the motion indicates whether the person wants to step back or forward in the movie. No clear pattern was found in the type(s) of gestures a person uses. One participant clearly preferred iconic gestures, three participants used mainly pantomime gestures, two used mainly metaphoric gestures, and the others used different combinations. There were some computer/media technology-related gestures, but this was not the main drive. Examples are drawing an X to power off the system, a gesture like pulling the cable out of the wall, drawing a triangle as a play symbol, and gestures like clicking a remote control or operating an old-style video camera. More popular ones were motions like clicking with a mouse and drawing two parallel lines as a pause symbol, and in many cases the movements of the participants resembled using a scrollbar.

5.2 Kinetic Features of Gestures

Out of the 14 participants, 13 preferred to use their right hand, and one participant used both hands more often. All in all, out of 100 video clips, 69 showed the usage of the right hand, 24 the usage of both hands and 5 the usage of the left hand.

Table 1. Body features of the different commands

Action          One-handed  Two-handed
Power on        6           8
Power off       7           8
Select          13          1
Play            12          2
Pause           12          2
Fast forward    12          2
Frame by frame  12          1

Table 1 shows the usage of the hands with regard to the different actions. The reason for not having exactly 14 results in a row is that some participants formulated multiple gestures for one action and some data were missing. While in most cases both hands were used for the power on and power off gestures, the other gestures clearly showed a significant preference for the right hand. This is important from a recognition point of view, because even the number of hands used can provide information for the recognition software, making recognition easier.
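As an illustration of this point, the counts in Table 1 can already serve as a simple prior: observing a two-handed gesture narrows the candidate set to the power commands. The sketch below (the function and the threshold are our own, not part of the study) filters actions by the observed hand-count fractions:

```python
# One- vs two-handed gesture counts per action, taken from Table 1.
COUNTS = {
    "power on":       (6, 8),
    "power off":      (7, 8),
    "select":         (13, 1),
    "play":           (12, 2),
    "pause":          (12, 2),
    "fast forward":   (12, 2),
    "frame by frame": (12, 1),
}

def candidate_actions(two_handed, threshold=0.5):
    """Return the actions for which the observed fraction of gestures
    with the given hand count exceeds the threshold."""
    idx = 1 if two_handed else 0
    return [action for action, counts in COUNTS.items()
            if counts[idx] / sum(counts) > threshold]

# A two-handed observation leaves only the power commands as candidates.
print(candidate_actions(two_handed=True))   # ['power on', 'power off']
```

In a real recognizer this filter would of course be combined with shape and motion features, but it shows how cheaply the hand count prunes the search space.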



Figure 3. (left) Partitioning of gesture space, (right) usage of gesture space for all participants

With regard to the shape of the hand, nine different shapes were found. This is not considered a large number for computer-based gesture recognition. Considering whether the drive of a gesture is the shape of the hand or its motion, the following was found: 11 participants expressed information by motion and the remaining 3 by the shape of the hand. Given that a hand shape-based recognition technique requires a different hand shape for each action, we found that 2 of these 3 participants fulfill this criterion completely, while the remaining participant used only two different hand shapes. When motion was the essence of the gesture, in 7 cases only the motion differed and a small number of hand shapes were used. In 4 cases, both the motion and the hand shape showed significant differences between the actions. This is actually an ideal situation because of the information intensity the gestures contain. It is worth mentioning that when analyzing the shape of the hand, only the configuration of the fingers was considered. The orientation of the hand can provide additional information for the recognition software; the relevance of this information depends on the recognition technique.

Figure 4. Intensity of usage of the gesture space for each participant combining all gestures

The space of the gestures was analyzed similarly to McNeill's interpretation of gesture space (McNeill, 1992). The difference is that we had to apply a space partition for standing people instead of sitting ones. Our gesture space was divided into nine partitions, as shown in Figure 3 (left). The center-center partition is limited by the position of the shoulders and the two side extremities of the pelvis. This partitioning technique could be applied because these body parts of the participants moved only minimally, except in some individual cases. By analyzing the space of the gestures, we were interested in which partition(s) the participants used, and we wanted to know the range of their motions, which is defined by the number of partitions used. Besides having a full picture of all gestures in terms of space and range, it was also interesting to see whether there is a typical pattern for one person regarding these features. Figure 4 shows the usage of the space by each participant, and Figure 3 (right) shows the accumulated result for all participants. The intensity of usage of a partition is shown by its color: the darker it is, the more often it was used. Figure 4 shows that in half of the cases there was a single partition that was used more extensively than the others. In the other half of the cases, 2-4 partitions were used to the same extent. Considering the range of motions, 8 participants used 2 partitions, 4 participants 1 partition and 2 participants 3 partitions. This was calculated by averaging the ranges of all actions for each person. Summarizing the results into a single image, it can be seen that the most popular partition was the center-center, the second the upper-center, and that the lower partitions were almost never used.
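A body-relative partitioning of this kind is straightforward to compute from tracked positions. The sketch below is our own illustration: the paper only specifies the limits of the center-center cell, so the remaining boundary choices, coordinate convention (y grows upward) and parameter names are assumptions:

```python
def gesture_partition(x, y, left_x, right_x, pelvis_y, shoulder_y):
    """Classify a hand position (x, y) into one of the nine gesture-space
    partitions. The center column lies between the two side extremities
    of the pelvis (left_x, right_x); the center row lies between pelvis
    height and shoulder height. Returns a "row-column" label such as
    "center-center" or "upper-left"."""
    col = "left" if x < left_x else ("right" if x > right_x else "center")
    row = "lower" if y < pelvis_y else ("upper" if y > shoulder_y else "center")
    return f"{row}-{col}"

# Example with made-up coordinates in meters for a standing participant:
cell = gesture_partition(0.05, 1.2, left_x=-0.18, right_x=0.18,
                         pelvis_y=1.0, shoulder_y=1.45)
print(cell)  # center-center
```

Because the thresholds are taken from the participant's own shoulder and pelvis positions, the partitioning stays valid as long as those body parts barely move, which matches the observation above.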

6. CONCLUSIONS

In this paper we studied the intuitiveness of gestures to control a media application. This was done by analyzing the types of gestures participants used in our experiment. In consideration of recognition technologies, the kinetic features of the gestures were also examined. The investigation of gesture types indicated a clear pattern where a position or direction was expressed (select, fast forward, frame-by-frame). In these cases, metaphoric gestures in combination with deictic or positioning gestures emerged. For other controls, the gesture types showed a more diverse pattern: participants applied pantomime, iconic and metaphoric gestures and tosses extensively. There were always typical gestures for actions shown by different participants, but there was no pattern for a person regarding the types of gestures. Reverse gestures for opposite actions proved to be intuitive, not only in those cases where they expressed direction, but in more general cases as well. Although the experiment focused on a specific application, participants rarely used computer- or DVD player-related gestures. This suggests that these gestures can be incorporated in other applications as well, where similar actions are needed. The focus on the kinetic features of gestures revealed some patterns. People preferred to use a single hand, most probably the dominant hand. This is ideal from an automatic recognition point of view, because one hand is easier to track than two, and there are no occlusion problems. Nine different hand shapes could be identified throughout the experiment recordings. For posture recognition this is not a large number, yet the similarity of the postures also influences their recognition. Participants expressed gestures either by hand motions or by hand shapes. In either case, a clear distinction of motions or shapes was found for each person, which enables recognition. There were also situations where both motion and shape were used to establish a gesture. This offers additional information for recognition that can be used to disambiguate the interpretation process. Most gesture and motion tracking systems are adjustable to different target locations, but are limited in the range of motion. In the experiment, the average range of gestures covered just two space partitions, for the most part the center-center and upper-center. This coverage allows an estimate of the space for which a tracking system has to be calibrated.

REFERENCES

Ahn, S. C. et al., 2004, Large Display Interaction using Video Avatar and Hand Gesture Recognition, Lecture Notes in Computer Science, Vol. 3212, pp. 261-268.
Arangarasan, R. and Phillips, G. N. Jr., 2002, Modular Approach of Multimodal Integration in a Virtual Environment, Fourth IEEE International Conference on Multimodal Interfaces, Pittsburgh, PA, USA, pp. 331-336.
Bouchard, D. and Badler, N., 2007, Semantic Segmentation of Motion Capture Using Laban Movement Analysis, Lecture Notes in Computer Science, Vol. 4722, Springer, pp. 37-44.
Brashear, H. et al., 2006, American Sign Language Recognition in Game Development for Deaf Children, Eighth International ACM SIGACCESS Conference on Computers & Accessibility, Portland, Oregon, USA, pp. 79-86.



Freeman, W. T. and Weissman, C. D., 1994, Television Control by Hand Gestures, IEEE International Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland.
Gnanayutham, P. and George, J., 2007, Inclusive Design for Brain Body Interfaces, International Conference on Human-Computer Interaction, Beijing, China, pp. 103-112.
Grange, S. and Baur, C., 2006, Robust Real-time 3D Detection of Obstructed Head and Hands in Indoor Environments, Journal of Multimedia, Vol. 1, No. 4, pp. 29-36.
Hand, C., 1997, A Survey of 3D Interaction Techniques, Computer Graphics Forum, Vol. 16, No. 5, pp. 269-281.
Horváth, I. et al., 2003, Comprehending a Hand Motion Language in Shape Conceptualization, ASME Design Engineering Technical Conferences, Chicago, IL, USA, CD-ROM CIE-48286.
Ianizotto, G. et al., 2001, Hand Tracking for Human-Computer Interaction with Graylevel VisualGlove: Turning Back to the Simple Way, Workshop on Perceptual User Interfaces, Orlando, Florida, USA.
Keskin, C. et al., 2007, A Multimodal 3D Healthcare Communication System, 3DTV Conference, Kos Island, Greece.
Kettebekov, S. and Sharma, R., 2001, Towards Natural Gesture/Speech Control of a Large Display, Eighth International Conference on Engineering for Human-Computer Interaction, Toronto, Canada, pp. 221-234.
LaViola, J. J., 1999, A Survey of Hand Posture and Gesture Recognition Techniques and Technologies, Research Report, Brown University, Department of Computer Science, CS-99-11, pp. 1-80.
Lausberg, H. et al., 2007, Speech-Independent Production of Communicative Gestures: Evidence from Patients with Complete Callosal Disconnection, Neuropsychologia, Vol. 45, pp. 3092-3104.
Lin, T. and Imamiya, A., 2006, Evaluating Usability Based on Multimodal Information: An Empirical Study, Eighth International Conference on Multimodal Interfaces, Banff, Alberta, Canada, pp. 364-371.
Malima, A. et al., 2006, A Fast Algorithm for Vision-based Hand Gesture Recognition for Robot Control, IEEE Conference on Signal Processing and Communications Applications, Antalya, Turkey, pp. 1-4.
Matsumiya, M. et al., 2000, An Immersive Modeling System for 3D Free-Form Design Using Implicit Surfaces, ACM Symposium on Virtual Reality Software and Technology, Seoul, Korea, pp. 67-74.
McNeill, D., 1992, Hand and Mind, University of Chicago Press, Chicago, IL, USA.
Neff, M. et al., 2008, Gesture Modeling and Animation Based on a Probabilistic Re-creation of Speaker Style, ACM Transactions on Graphics, Vol. 27, Issue 1, Article No. 5.
New, J. R. et al., 2004, Med-LIFE: A Diagnostic Aid for Medical Imagery, International Conference on Mathematics and Engineering Techniques in Medicine and Biological Sciences, Las Vegas, Nevada.
Pavlovic, V. et al., 1997, Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 677-695.
Spielberg, S., 2002, Minority Report, 20th Century Fox.
Stein, D. et al., 2007, Hand in Hand: Automatic Sign Language to English Translation, Conference on Theoretical and Methodological Issues in Machine Translation, Skovde, Sweden.
Tanriverdi, V. and Jacob, R. J. K., 2000, Interacting with Eye Movements in Virtual Environments, Conference on Computer-Human Interaction, New York, NY, USA, pp. 265-272.
Truong, K. P. et al., 2007, Unobtrusive Multimodal Emotion Detection in Adaptive Interfaces: Speech and Facial Expressions, Twelfth International Conference on Human-Computer Interaction, Beijing, China, pp. 354-363.
Varga, E. et al., 2004, Survey and Investigation of Hand Motion Processing Technologies for Compliance with Shape Conceptualization, ASME Design Engineering Technical Conferences, Salt Lake City, Utah, USA.
Varga, E., 2007, Using Hand Motions in Conceptual Shape Design: Theories, Methods and Tools, PhD dissertation, Delft University of Technology, Delft, The Netherlands, PrintPartners Ipskamp, pp. 1-241.
Verlinden, J. et al., 2001, Exploring Conceptual Design Using a Clay-based Wizard of Oz Technique, ...
Vogler, C. and Metaxas, D., 2004, Handshapes and Movements: Multiple Channel ASL Recognition, Lecture Notes in Artificial Intelligence, Vol. 2915, Springer, pp. 247-258.
Watson, R., 1993, A Survey of Gesture Recognition Techniques, Technical Report, TCD-CS-93-11, Trinity College, Dublin, Ireland, pp. 1-23.
Wolf, J. C. and Bugmann, G., 2006, Linking Speech and Gesture in Multimodal Instruction Systems, Fifteenth International IEEE Symposium on Robot and Human Interactive Communication, Hatfield, UK, pp. 141-144.
Xu, S. et al., 2000, Rapid Creation and Direct Manipulation of Free-Form Curves Using Data Glove, Conference on Engineering Design and Automation, Orlando, Florida, USA, pp. 954-959.
