2011 IEEE/IPSJ International Symposium on Applications and the Internet
A Unified Method for Multiple Home Appliances Control through Static Finger Gestures

Lei Jing¹, Kaoru Yamagishi², Junbo Wang¹, Yinghui Zhou², Tongjun Huang¹, and Zixue Cheng¹
¹School of Computer Science and Engineering, ²Graduate School of Computer Science and Engineering
University of Aizu, Aizu-Wakamatsu, Japan
[email protected]
Abstract— Various electrical appliances have penetrated people's daily lives, including televisions, video and audio equipment, and other household devices. Many of them provide an infrared Remote Controller (RC) for appliance control. But as the number of controllers grows, it becomes inconvenient to reach for a different controller for each device, especially for senior citizens with diminished physical and mental abilities. In this paper, a one-for-all gesture-based remote control system is proposed, by which consumers can control different appliances in a natural and unified way. First, two kinds of fundamental devices for the physical layer of the novel control method are presented: a compact wearable sensing mote called the Magic Ring (MR), and its command receiver and executor called the Electrical Appliance Node (EA-Node). Moreover, a unified control protocol for the application layer is defined to solve the problems of effective target selection and function navigation. A comparative study between the proposed MR-based method and the traditional RC-based method was performed on three aspects: operating effectiveness, learning curve, and degree of fatigue. The experimental results show that the MR-based method can reach as much as 88% of the performance of the RC-based method (in general, a beginner completes about 40% as many tasks with the MR as with the RC; after several minutes of practice, that ratio rises to about 60%).

Keywords- Internet of Things (IoT), network protocol, human factor, appliance control, finger gesture recognition, Magic Ring (MR).

978-0-7695-4423-6/11 $26.00 © 2011 IEEE  DOI 10.1109/SAINT.2011.21

I. INTRODUCTION

A pervasive, interconnected cyber-physical world is described in the vision of the Internet of Things (IoT). Home appliances are indispensable things for people and will play an important role in the IoT. But connecting various appliances is a challenge, since networked appliances are used mainly for control rather than communication, which differs from the main purpose of the widespread Internet.

A remote control is a component of an electrical appliance that is used to operate the device wirelessly from a short distance. The earliest wireless television remote control appeared in 1955; it was based on a photoelectric cell and offered fundamental functions such as turning the television on or off, changing the channel, and muting the sound. Nowadays, the Remote Controller (RC) based on infrared has dominated the electrical appliance market for decades. However, as the diversity of appliances has increased, people tend to find it inconvenient, or even confusing, to deal with multiple RCs. Thus the first-generation universal controller was invented in the 1980s. But the incompatible command sets of different manufacturers have hindered the wide adoption of universal controllers. It is therefore important to find a unified and efficient way to control multiple appliances.

The purpose of this paper is to propose and evaluate a novel remote control method that performs natural and unified control according to finger gestures. Two main problems must be tackled to fulfill this purpose. One is how to design a fundamental hardware and software structure for sensing-based control. The other is how to define a unified control protocol covering the whole control process, from inputting a gesture to selecting the target function. The major contributions of this paper are: 1) presenting a wireless sensing pilot system that enables finger-gesture-based remote control; 2) defining a switching protocol in the application layer for selection among appliances; and 3) studying the usability of the pilot system through a comparative experiment.

The rest of the paper is organized as follows. A survey of related work is given in Section II. Section III gives a three-layer hierarchical network model. Section IV describes the structure of the two primary devices used in the pilot evaluation system. Section V defines a set of static finger gestures. Section VI gives the application-layer protocol for target and function selection. The experiment setting, results, and analysis are presented in Section VII. The work is concluded in Section VIII.
II. RELATED WORK
Infrared wireless control is the dominant remote control method because it lets people stay comfortably on the sofa and enjoy the services provided by various electrical appliances. But its shortcomings are obvious as well, including one-direction communication, one-for-one control, line-of-sight control, and handheld control. Many researchers suggest that voice-based control should be very convenient, since users can speak to appliances with no additional device to carry. But voice recognition places high demands on both the surrounding environment and computing resources, which makes it difficult to deploy on a mini remote control. Gesture-based control, using vision or wearable sensors, is another important branch of remote control. Vision-based control is often based on a CCD (Charge Coupled Device) camera and vision recognition, which is lightweight for the user since it requires no, or only a few, markers attached to the body [1]. But it is generally heavy for the system, which requires a set of devices to be deployed in the target environment, since in most cases the cameras must be fixed in one location and are difficult for users to operate. Recently, self-contained portable vision-based control systems have been proposed, such as SixthSense [2]. But such a system can only detect two-dimensional static gestures, since it contains a single camera and dynamic gestures require large computing resources. Unlike vision, the wearable-sensor-based method requires relatively little computing power even for dynamic gestures. Moreover, it is possible to detect the three-dimensional natural gestures of daily life through a well-designed sensor combination and deployment [3]. ARC proposed a watch-type wearable remote control that recognizes hand motions, together with the concept of a virtual menu, to operate various kinds of consumer appliances in a unified way [4]. But it still requires the user to face a display to navigate the menu. After a comparison between these technologies — infrared versus wireless, voice versus gesture, vision versus wearable — the wireless wearable sensing method is adopted for the pilot implementation of the wireless appliance control system.
Wearable control does not need to be held in the hand, and can provide at-hand service with little access time. Moreover, wireless communication provides two-way communication among multiple devices at the same time. More specifically, a wearable ring-type sensing mote called the Magic Ring (MR) is adopted to detect natural finger gestures. Various kinds of devices and methods have been proposed to detect finger gestures [5, 6], but due to the limitations of the technologies of the time, the devices were too big to be worn on a finger. Recently, the development of several technologies, including MEMS sensors, systems-on-chip, mini-size lithium polymer batteries, and wireless power transfer, has made it feasible to design and implement such small, lightweight sensing motes at relatively low cost.
Figure 1. Model of finger-gesture-based remote control (the MR as master; Receiver-equipped EA-Nodes for the lamp, radio, and fan as slaves)
III. SYSTEM MODEL

The indoor network for appliance control is a typical Wireless Personal Area Network (WPAN). Multiple appliances together with a remote control such as the MR form a star topology, as shown in Figure 1. The remote control is at the center, acting as the master, and the appliances are at the periphery, acting as slaves. To simplify the discussion, two assumptions are made about the WPAN: first, the MR can communicate with any target EA-Node within one hop; second, there is only one MR in the WPAN. Based on these two assumptions, a two-fold abstraction is introduced to define a unified control method. For the convenience of one-for-all control, various kinds of appliances are abstracted as one type of device called the EA-Node (Electrical Appliance Node). The EA-Node, one of the two primary elements of gesture-based wireless control, is a traditional electrical appliance augmented with a Receiver, a component that decodes and executes control commands. The other primary element is the wearable wireless controller, such as the MR, worn on a finger to identify, encode, and send control commands. These two kinds of elements, the MR and the EA-Node, constitute the foundation for gesture-based one-for-all control. Furthermore, a unified wireless control interface between the MR and the EA-Nodes is defined to deal with the functional diversity of EA-Nodes. A three-layer wireless communication interface is defined, as shown in Figure 2.
Figure 2. Three-layer wireless control interface
Figure 3. MR hardware: (a) layout design; (b) PCB design; (c) assembled MR
The mapping relationship between the three-layer interface and the standard OSI network architecture is shown as well. The bottom layer is the physical layer, which is responsible for packet transmission and reception over RF waves. The middle link layer consists of two sub-layers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The top layer is the application layer, which includes target selection control (TGTSELT) and function selection control (FUNSELT). TGTSELT defines the switching protocol among multiple appliances, and FUNSELT is responsible for command encoding and decoding to select specific functions on a selected appliance. The network and transport layers are not included in the interface, since they are not necessarily needed in a single-hop network. In the wireless control process, the PHY and data link layers can adopt existing WPAN protocols such as IEEE 802.15.4 [7] and Bluetooth [8]; thus the bottom and middle layers will not be discussed in this paper. The two special functions in the application layer, target selection and function selection, will be discussed in more detail. The two-fold abstraction of the component devices and software protocols has been introduced above. The design and implementation of the hardware devices are discussed in Section IV, and the mechanisms of TGTSELT and FUNSELT are given in Section VI.

Figure 4. Overall architecture of the remote control system based on the MR
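The paper does not specify a byte-level format for the application-layer commands the interface carries. As an illustration only, a minimal frame holding the Target ID and Gesture ID discussed later might look like this; the 3-byte layout and the magic byte are assumptions, not part of the paper's protocol:

```python
import struct

# Hypothetical application-layer frame for the MR <-> EA-Node link.
# Layout (assumed): 1 magic byte, 1 Target ID byte, 1 Gesture ID byte.
FRAME_FMT = ">BBB"
MAGIC = 0xA5

def encode_command(target_id: int, gesture_id: int) -> bytes:
    """Pack a control command as the MR side (TGTSELT/FUNSELT payload)."""
    return struct.pack(FRAME_FMT, MAGIC, target_id, gesture_id)

def decode_command(frame: bytes):
    """Unpack a frame on the EA-Node side; returns (target_id, gesture_id)."""
    magic, target_id, gesture_id = struct.unpack(FRAME_FMT, frame)
    if magic != MAGIC:
        raise ValueError("not a control frame")
    return target_id, gesture_id
```

Such a fixed, appliance-independent frame is what lets one controller address the functional diversity of all EA-Nodes: only the interpretation of the Gesture ID changes per appliance.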
IV. STRUCTURE OF MR AND EA-NODE

A. Design of Overall Architecture
The overall architecture consists of two kinds of main components: the MR and the EA-Node. The MR is a finger-gesture-based controller that identifies, encodes, and sends control commands. The EA-Node is a normal electrical appliance augmented with a Receiver, an interface that decodes and executes control commands. A star network topology is adopted for one-for-all control, since it makes it simple to add or remove nodes: the MR is the central node, and the control targets, the EA-Nodes, are the peripheral nodes. In the following, a specific implementation of the MR and the EA-Node is introduced in more detail.

B. Hardware Structure of MR
The MR is used to recognize finger gestures and send control commands. As shown in the lower half of Figure 4, the MR consists of five major modules: a sensor module (a 3D MEMS accelerometer MMA7361L and a Real Time Clock (RTC)) to detect finger gestures; a multimodal feedback module (vibrator, buzzer, LED) to adapt to different environments and consumer preferences; a processing module (an 8-bit 8051-core microprocessor CC1110F32 with 32 KB ROM, 4 KB RAM, DMA, and a 12-bit ADC) to perform sensor data fusion, gesture recognition, and wireless communication management; a wireless transceiver working at 433 MHz for communication; and a power regulation and charging module for the tiny 3.7 V lithium polymer rechargeable battery. To minimize the size of the device, these modules are deployed on three rigid PCBs (Figure 3 (a), layout design). A picture of one PCB sheet is shown in Figure 3 (b). The PCBs are then assembled into a ring shape, as shown in Figure 3 (c), so that the relative position and orientation between the finger and the accelerometer are fixed to ensure correct gesture recognition. The weight of the MR is about 10 grams including the battery.

C. Hardware Structure of EA-Node
The EA-Node differs from a normal electrical appliance in that it has a plug-in module, the Receiver, to bridge the target electrical appliance and the MR. As shown in the upper half of Figure 4, the Receiver consists of three major modules: a wireless transceiver; a processing module to decode and execute the corresponding commands; and an IO control module customized for connecting to and controlling the target electrical appliance. In our experiment, a wireless embedded platform called CuteBox is adopted as the Receiver; detailed information on CuteBox can be found in [9].

V. STATIC FINGER GESTURES

Six static finger gestures are defined for this preliminary experiment on the novel method of appliance control. The names of the gestures and the corresponding postures of the index finger are shown in Figure 5. The accelerometer on the MR is used as a tilt sensor, since it can detect gravity. The gravity vector consists of three components along the three axes of the accelerometer, and these components can discriminate the six static gestures in a straightforward way. A detailed discussion of the gesture recognition method can be found in [10]. The same gesture can be interpreted with a different meaning for each appliance; an example mapping between the gestures and three appliances, adopted in the evaluation experiment, is shown in Table 1 (cells marked "—" are not assigned).

Figure 5. Name and posture of finger gestures

TABLE 1. DEFINITION OF CONTROL COMMAND IDS
ID   | Gesture name | Lamp                 | CD Radio             | TV
0x01 | Finger Up    | —                    | Power on/off         | Power on/off
0x02 | Finger Down  | Brightness up        | CD Selection         | Change Channel
0x03 | Right Rotate | —                    | Volume up            | Volume up
0x04 | Left Rotate  | —                    | Volume down          | Volume down
0x05 | Max Up       | Change appliance (+) | Change appliance (+) | Change appliance (+)
0x06 | Max Down     | Change appliance (-) | Change appliance (-) | Change appliance (-)
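The tilt-based discrimination of the six static gestures can be sketched as follows. This is a hedged illustration, not the method of [10]: the axis convention and the 0.6 g threshold are assumptions made for the example.

```python
# Hypothetical axis convention (an assumption, not from the paper):
# +X along the finger, +Y toward the thumb side, +Z out of the fingernail.
# Each static posture leaves gravity (~1 g at rest) dominating one axis.
GESTURES = {
    ("x", -1): "Finger Up",
    ("x", +1): "Finger Down",
    ("y", +1): "Right Rotate",
    ("y", -1): "Left Rotate",
    ("z", -1): "Max Up",
    ("z", +1): "Max Down",
}

def classify(ax: float, ay: float, az: float, threshold: float = 0.6):
    """Map a static 3-axis accelerometer reading (in g) to a gesture name.

    The dominant gravity component decides the gesture; returns None when
    no axis clearly dominates (the hand is not in a defined posture).
    """
    components = {"x": ax, "y": ay, "z": az}
    axis, value = max(components.items(), key=lambda kv: abs(kv[1]))
    if abs(value) < threshold:
        return None
    return GESTURES[(axis, 1 if value > 0 else -1)]
```

Because the decision reduces to a sign-and-threshold test on three values, it fits comfortably on the MR's 8-bit microcontroller, which is the point the section makes about tilt sensing being lightweight.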
VI. TARGET AND FUNCTION SELECTION
As mentioned above, one-for-all control consists of two steps: target selection and function selection.

A. Protocol of Target Selection
Target selection methods can be classified into three kinds: the direct pointing method, based on a directional electromagnetic wave such as an infrared beam; the direct triggering method, based on a one-to-one mapping between a command and a control target, like hotkeys; and the switching method, based on a predefined turn-taking protocol. Among the three options, the pointing method requires the target to be in line of sight (LoS), which contradicts the eyes-free feature of MR operation. The direct triggering method is the most efficient, but it burdens consumers with remembering more commands; thus it fits situations with only a few targets, or can be combined with the switching method to provide quick selection of the most frequently used appliances, such as the television and the lamp. In our experiment, the switching method is adopted. Two commands, Max Up and Max Down, are used for target switching, as shown in Figure 6. Each gesture is uniquely identified in the system by a Gesture ID. A polling protocol is adopted to take turns: the MR is designated as the master node, which polls each of the EA-Nodes in a round-robin fashion. When the MR is powered up or reset, the ID of the current control target is null (Target ID 0x00). Once a target switching command is performed, the MR gives the next turn to the neighboring node on the target list. In detail, the process of target switching consists of three steps: close the current connection, select the next appliance, and establish the new connection. For example, suppose the current control target is a lamp whose Target ID is 0x01. When the gesture Max Up is performed, as shown in Figure 6, the Gesture ID (0x05) is first sent to the lamp to close the current connection; then the MR selects the next Target ID on the target list to establish the new connection.

Figure 6. Protocol of target selection

B. Protocol of Function Selection
The multiple functions provided by an appliance compose a function set, such as power on/off, volume up/down, and channel switching. After the selection of a control target, a specific function can be selected from the function set using the function selection protocol. Function selection can be classified into two patterns: Full Function Selection (FFS) and Major Function Selection (MFS). FFS organizes all the functions of an appliance into a structure so that all of them can be selected; MFS provides ways to select only the major functions, like hotkeys.
1) Full Function Selection
As shown in Figure 7, an appliance can be represented as a set of functions, and these functions form a tree structure according to their functional dependencies. Each node of the tree is a function, which can be controlled by a pair of contrary commands such as on/off, faster/slower, or bigger/smaller. The root node is power on/off. Typically, five commands are enough for function navigation and selection: two for vertical navigation between layers, two for horizontal navigation between functions in the same layer, and one for selection. FFS gives consumers access to all the functions, but it has two disadvantages. First, its efficiency declines as the depth and breadth of the tree increase. Second, it is not intuitive to operate, especially when a traditional display-based interface is not available.

Figure 7. Function tree of an electrical appliance

2) Major Function Selection
From daily experience, the access frequencies of different functions differ extremely: for a TV, channel selection and volume adjustment are used every day, while white balance adjustment (a color-correction function found in every TV) is seldom touched. Thus, for each appliance, major functions can be selected according to access frequency and still meet most needs. In the rest of the paper, MFS is adopted to evaluate the usability of the novel appliance control method.

VII. EVALUATION AND DISCUSSION

A. Evaluation Metrics
A comparative study between the MR and the RC was performed to evaluate several aspects of the novel MR-based appliance control method: operating effectiveness, learning curve, and fatigue.
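The three-step target-switching sequence of Section VI.A (close the current connection, select the next appliance, establish the new connection) can be sketched in a few lines. The class and method names are illustrative, and the radio calls are stubbed, since the paper does not specify an over-the-air API:

```python
# Minimal sketch of the round-robin target-switching protocol (TGTSELT).
# Gesture IDs follow Table 1.
GESTURE_MAX_UP = 0x05
GESTURE_MAX_DOWN = 0x06

class MagicRing:
    def __init__(self, target_ids):
        self.targets = list(target_ids)  # e.g. [0x01, 0x02, 0x03]
        self.current = None              # null target (0x00) after power-up/reset

    def _send(self, target_id, gesture_id):
        pass  # stub: transmit (target_id, gesture_id) over the 433 MHz link

    def switch_target(self, gesture_id):
        """Move the control turn to the neighboring EA-Node on the list."""
        step = 1 if gesture_id == GESTURE_MAX_UP else -1
        if self.current is not None:
            self._send(self.current, gesture_id)  # (1) close current connection
            idx = (self.targets.index(self.current) + step) % len(self.targets)
        else:
            idx = 0  # first switch after reset picks the head of the list
        self.current = self.targets[idx]          # (2) select next appliance
        self._send(self.current, gesture_id)      # (3) establish new connection
```

Because the MR is the single master of the star WPAN, this round-robin state lives entirely on the ring; the EA-Nodes only accept or drop commands addressed to them.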
The three metrics are defined as follows:
- Operating effectiveness: the average number of tasks that can be completed in a given period of time using a controller.
- Learning curve: the change in the average learning rate over a series of practice sessions with the same controller.
- Fatigue: includes both physical and mental aspects. Two matters should be clarified: one is whether the user fatigues easily; the other is how long recovery takes.

B. Experiment Systems Setup
Two experiment systems were set up in the research laboratory to evaluate the performance of the MR and the RC respectively, as shown in Figure 8.

Figure 8. Setting of the two evaluation systems for the comparative experiment: (a) evaluation system for RC; (b) evaluation system for MR

1) Remote Control Evaluation System (RCES): RCES consists of three home appliances, a television, a CD radio, and a lamp, as shown in Figure 8 (a). The models of these appliances are given in Table 2 for reference.

TABLE 2. MODELS OF APPLIANCES USED IN RCES
Appliance  | Model
Lamp       | 23LED RC 66199 (Comolife)
CD Radio   | ZS-E70 (SONY)
Television | TH-15L70 (Panasonic)

2) Magic Ring Evaluation System (MRES): MRES consists of several EA-Node simulators and one MR. A simulator is a dummy EA-Node consisting of a CuteBox and an LED panel, as shown in Figure 9. The CuteBox, an embedded platform, serves as the wireless command receiver, decoder, and executor [9]; the LED panel feeds the internal state of the dummy EA-Node back to the user.

Figure 9. EA-Node simulator

C. Participants
Six participants (male: 3, female: 3, age: 28±6), none of whom had any prior knowledge of the MR, took part in the evaluation experiments.

D. Experiment Process
1) Process: The whole experiment consists of 10 sections (2 minutes per section), with a one-minute break between two continuous sections. The first half of the sections (RC1st to RC5th) and the second half (MR1st to MR5th) were assigned to the RC and the MR respectively. Only one participant at a time performed the experiment, under the guidance of an observer. At the start of an experiment, the observer explained the usage of the experiment systems. Then, during each section, the participant performed the tasks given by the observer one by one. The log data, including the task IDs and the number of accomplished tasks, were recorded by the observer and used mainly to evaluate operating effectiveness and the learning curve. Moreover, a questionnaire was filled out by each participant immediately after the experiment to collect his or her feedback. The six gestures were printed on a piece of paper so that participants could consult it to help them remember the gestures.
2) Task Difficulty Control: The number of tasks accomplished in a fixed period of time is closely correlated with the difficulty of the tasks, so the difficulty of every section should be controlled to ensure an objective experimental result. A random drawing method was designed for this purpose. First, the tasks were predefined and printed on cards (one task per card); the same 19 tasks were used in the evaluation experiments for both the RC and the MR, as shown in Table 3. Second, the cards were grouped by difficulty: the tasks were divided into three difficulty levels according to the number of steps required, as shown in Table 3.
TABLE 3. TASK LIST FOR EVALUATION EXPERIMENT
Task ID | Task contents                     | Difficulty level
1       | Lamp⇒Change Brightness one time   | Low
2       | Lamp⇒Change Brightness two times  | Middle
3       | Lamp⇒Change Brightness three times| High
4       | TV⇒Power ON/OFF                   | Low
5       | TV⇒Change Channel one time        | Low
6       | TV⇒Change Channel two times       | Middle
7       | TV⇒Change Channel three times     | High
8       | TV⇒Change Volume one time         | Low
9       | TV⇒Change Volume two times        | Middle
10      | TV⇒Change Volume three times      | High
11      | TV⇒Change Volume four times       | High
12      | Radio⇒Power ON/OFF                | Low
13      | Radio⇒Change Channel one time     | Low
14      | Radio⇒Change Channel two times    | Middle
15      | Radio⇒Change Channel three times  | High
16      | Radio⇒Change Volume one time      | Low
17      | Radio⇒Change Volume two times     | Middle
18      | Radio⇒Change Volume three times   | High
19      | Radio⇒Change Volume four times    | High

TABLE 4. TASKS COMPLETED USING MR AND RC
Section | RC (mean±sd, rounded to integer) | MR (mean±sd, rounded to integer) | Ratio (MR/RC, rounded to whole percent)
1st     | 14±3 | 6±3  | 43%
2nd     | 16±2 | 8±4  | 50%
3rd     | 17±3 | 10±3 | 59%
4th     | 17±2 | 11±4 | 65%
5th     | 19±2 | 11±3 | 58%
All     | 17±2 | 9±3  | 53%
The low level needs only one step of action; the middle level needs two steps; the high level needs three or four steps to complete a task. Third, each section was divided into three sub-sections (40 seconds per sub-section); for the 1st, 2nd, and 3rd sub-sections, the observer randomly drew cards from the low, middle, and high difficulty groups respectively.
3) Placement of Controllers: The three RCs were placed at hand, and during the experiment the participant could hold them all. The MR was mounted on the finger throughout the MR evaluation sections.

E. Experimental Results
1) Operating effectiveness
The average number of completed tasks and the standard deviation are listed in Table 4. Overall, in the same period of time, the MR completed about half as many tasks (53%) as the RC. The result is reasonable for such a preliminary experiment. In the authors' opinion, three positive factors for the RC and two negative factors for the MR were rooted in the experiment and led to this result. First, participants were all familiar with the paradigm of RC-based control; once they were familiar with the arrangement of the buttons on the RCs, almost all of them kept a stably high effectiveness (an average of 17 tasks with a standard deviation (sd) of 2, about 12% of the average performance). Second, the command recognition accuracy is nearly 100% for the RC. Third, the access time for the RC was not counted: all the RCs were placed at hand, which is not the normal case in real life, where RCs are placed randomly and users spend much more time finding the right RC before starting the control operation. Meanwhile, two negative factors exist for the MR. First, MR-based control is a new control paradigm, and participants need a longer time to become familiar with it. However, from the 1st to the 5th section, the ratio of MR to RC increased from 43% to 65%, which indicates that the MR shows competitive performance once users become familiar with the paradigm through practice. Second, the tilt-based gesture recognition is error prone, since it depends heavily on the unstable postures of the participants; that is why the standard deviation of the MR is 3, about 33% of its average performance and 21 percentage points higher than the corresponding figure for the RC (12%).
2) Learning curve
The individual learning rate of each participant is graphically represented by the learning curves in Figure 10. The six pairs of learning curves can be categorized into three types according to the gap between them. For PID1 and PID2, the performance differences between MR and RC in each section are fewer than 5 tasks, and the ratios of MR to RC are 76% and 88% respectively; these are the smallest gaps. For PID4 and PID5, the differences are about 10 tasks with a ratio of about 55%, the largest gaps. PID3 and PID6, at 63% and 71%, rank in the middle. Such a large variation in learning performance is caused mainly by the following two factors. One is the gesture definition, some of which is not intuitive to understand and perform. As noted in previous research, an intuitive gesture definition is crucial for usage performance and user satisfaction [11]. According to the questionnaire at the end of the experiment, most participants thought there was no obvious relation between some gestures and the corresponding commands, such as Finger Up for power on/off and Finger Down for changing the channel. Meanwhile, some participants voted the left/right rotate gestures for volume adjustment as intuitive to understand and easy to remember.
According to the authors' observations, two suggestions should be taken into consideration when defining a set of gestures for control purposes. The first is that gestures should be extracted from habitual operations; for example, the "rotate" action of the thumb and index finger is often associated with volume adjustment on a TV or radio.
Figure 10. Individual learning curves of the six participants. The horizontal axis is the section index; the vertical axis is the number of tasks completed during that section
The second is that a pair of gestures should be defined for the same type of operation; for example, the left/right rotate gestures are defined for volume up/down. It proved confusing and error prone when a pair of gestures was defined for different types of operation, like the Finger Up/Down gestures mentioned above. The other factor is the tilt-sensor-based gesture recognition method, which proved unstable across participants and needs user-specific adjustment; in this experiment, however, a single set of predefined tilt thresholds was used throughout. Finally, as shown in Table 4, the ratio of MR to RC shows a clearly ascending trend. Can the MR-based control method, then, reach an equal or better performance than the RC-based method? To answer this question, a statistical learning curve was fitted using a natural logarithmic function, with the average number of completed tasks across all six participants as the sample data points, as shown in Figure 11. The two logarithmic functions, (1) for the RC and (2) for the MR, are obtained from these sample points:

y = 2.6186 ln(x) + 14.159 (R² = 0.945)    (1)

y = 3.098 ln(x) + 6.1337 (R² = 0.9538)    (2)
where x is the section index, y is the number of completed tasks, and R² is the coefficient of determination. Since both R² values are greater than 0.9, the prediction of the future trend can be considered reliable enough. These functions predict that a competitive MR performance can be reached within an acceptable time, as shown in Table 5; for example, in general, a user can complete 15 tasks in one section after 34 minutes of practice.

TABLE 5. A PREDICTION ON THE PERFORMANCE OF MR
Number of tasks (tasks per section) | Necessary practice time (minutes)
15 | 34
20 | 176
25 | 882

Figure 11. Statistical learning curve using a natural logarithmic fitting function, with the average number of completed tasks as the sample data points

3) Fatigue
As a key metric of the comfort of a wearable device [12], physical and mental fatigue were also investigated through the experiment. Finger gestures involve more muscles than keyboard or speech input, so the method is not fit for long continuous operation. But for short-time operations like appliance control, physical fatigue should be a minor factor in performance decline compared with mental fatigue. In our experiment, participants bore a higher cognitive burden when using the MR, since they had to recall a sequence of gestures to complete a given task. In fact, a preliminary experiment was performed before
the main one. All the settings were the same except that the break time between two continuous sections was 30 seconds. Another six participants took that experiment, and the performance of four of them declined obviously from the 3rd and 4th sections. The questionnaire verified that they felt tired, since the task requires continuous concentration. Therefore, the break time was adjusted to one minute in the main experiment, and no obvious performance decline was observed this time. A comparative analysis between the preliminary and the main experiment shows that the fatigue caused by the MR stems mainly from continuous concentration, not physical operation; moreover, such fatigue can be recovered quickly after a short break.
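The logarithmic fit and the practice-time predictions in Table 5 can be reproduced with a short script. This is a sketch: the sample points below are the rounded per-section means from Table 4, so the fitted coefficients differ slightly from the exact values in Equations (1) and (2).

```python
import math

# Rounded per-section MR means from Table 4 (approximate sample points).
sections = [1, 2, 3, 4, 5]
mr_tasks = [6, 8, 10, 11, 11]

def fit_log(xs, ys):
    """Least-squares fit of y = a*ln(x) + b."""
    lx = [math.log(x) for x in xs]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ys) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ys))
         / sum((u - mx) ** 2 for u in lx))
    return a, my - a * mx

def practice_minutes(target_tasks, a, b, minutes_per_section=2):
    """Invert the learning curve: minutes of practice to reach target_tasks."""
    return minutes_per_section * math.exp((target_tasks - b) / a)

# Using the paper's published MR curve y = 3.098*ln(x) + 6.1337,
# reaching 15 tasks per section takes roughly half an hour of practice,
# in line with the 34 minutes listed in Table 5 (small rounding differences).
minutes_for_15 = practice_minutes(15, 3.098, 6.1337)
```

Note how steeply the exponential inversion grows: each extra 3.1 tasks per section costs an e-fold increase in practice time, which is why Table 5 jumps from 34 to 176 to 882 minutes.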
VIII. CONCLUSION AND FUTURE WORK

A gesture-based method for appliance control has been proposed and preliminarily evaluated. Compared with the infrared remote controller, a competitive performance can be achieved after tens of minutes of practice. Moreover, a limitation of this control paradigm has been pointed out: it is not fit for long-term continuous operation, due to mental fatigue. In the future, effort should be devoted to minimizing the negative factors of the proposed method, such as by developing a stable gesture recognition method and an intuitive gesture definition.

ACKNOWLEDGMENT

The authors would like to thank the 12 participants who kindly attended the evaluation experiments and gave us valuable comments.

REFERENCES
[1] R. Y. Wang and J. Popovic, "Real-time hand-tracking with a color glove," ACM Trans. Graph., vol. 28, no. 3, pp. 1-8, 2009.
[2] P. Mistry and P. Maes, "SixthSense: a wearable gestural interface," in ACM SIGGRAPH ASIA 2009 Sketches, Yokohama, Japan, 2009.
[3] P. Juha, E. Miikka, K. Panu et al., "Activity Classification Using Realistic Data From Wearable Sensors," IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 119-128, 2006.
[4] L. Dong-Woo, L. Jeong-Mook, J. Sunwoo et al., "Actual remote control: a universal remote control using hand motions on a virtual menu," IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1439-1446, 2009.
[5] B. Amento, W. Hill, and L. Terveen, "The sound of one hand: a wrist-mounted bio-acoustic fingertip gesture interface," in CHI '02 Extended Abstracts on Human Factors in Computing Systems, Minneapolis, Minnesota, USA, 2002.
[6] M. Fukumoto and Y. Tonomura, "Wireless FingeRing: A Body-coupled Wearable Keyboard," Information Processing Society of Japan Magazine, vol. 39, no. 5, pp. 1423-1430, 1998. (in Japanese)
[7] IEEE 802.15.4, "Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specification for Low-Rate Wireless Personal Area Networks (LR-WPANs)," IEEE: New York, 2003.
[8] Bluetooth SIG, "Bluetooth Specification V4.0," 2010.
[9] "CUTEBOX," http://cutebox.wikispaces.com/.
[10] L. Jing, Y. Zhou, Z. Cheng et al., "A Recognition Method for One-stroke Finger Gestures Using a MEMS 3D Accelerometer," IEICE Transactions on Information, 2011, in press.
[11] J. O. Wobbrock, M. R. Morris, and A. D. Wilson, "User-defined gestures for surface computing," in Proceedings of the 27th International Conference on Human Factors in Computing Systems, Boston, MA, USA, 2009.
[12] J. F. Knight, C. Baber, A. Schwirtz et al., "The Comfort Assessment of Wearable Computers," in Sixth International Symposium on Wearable Computers (ISWC 2002), Washington, USA, 2002, pp. 65-72.