International Journal of Humanoid Robotics, Vol. 10, No. 1 (2013) 1350006 (31 pages). © World Scientific Publishing Company. DOI: 10.1142/S0219843613500060
MODELING THE HUMAN BLINK: A COMPUTATIONAL MODEL FOR USE WITHIN HUMAN ROBOT INTERACTION
C. C. FORD
Center for Robotics and Neural Systems, University of Plymouth, Room B106, Portland Square Building, Drake Circus, Plymouth, Devon, PL4 8AA, UK
[email protected]

G. BUGMANN
University of Plymouth, Portland Square Building, Drake Circus, Plymouth, Devon, PL4 8AA, UK
[email protected]

P. CULVERHOUSE
University of Plymouth, Room B106, Portland Square Building, Drake Circus, Plymouth, Devon, PL4 8AA, UK
[email protected]

Received 16 August 2012
Accepted 21 January 2013
Published 2 April 2013
This paper describes findings from a human-to-human interaction experiment that examined human communicative nonverbal facial behaviour. The aim was to develop a more comfortable and effective model of social human–robot communication. Analysis of the data revealed a strong co-occurrence between human blink production and the nonverbal communicative behaviours of own speech instigation and completion, interlocutor speech instigation, looking at/away from the interlocutor, facial expression instigation and completion, and mental communicative state changes. Seventy-one percent of the total 2007 analysed blinks co-occurred with these behaviours within a time window of ±375 ms, well beyond their chance co-occurrence probability of 23%. Thus between 48% and 71% of blinks are directly related to human communicative behaviour and are not simply "physiological" (e.g., for cleaning/humidifying the eye). Female participants were found to blink twice as often as male participants in the same communicative scenario, and to have a longer average blink duration. These results provide the basis for the implementation of a blink generation system as part of a social cognitive robot for human–robot interaction.

Keywords: Social robotics; human–robot interaction; human blink; human communication; mental communicative states; computational modeling; nonverbal behavior; blink model.
1. Introduction

Recent progress in human–robot interaction (HRI) increases the possibility of creating a social cognitive robotic system that resembles the bodily structure of a human and can comfortably exist in human social surroundings. Any truly useful cognitive social robot will need to be able to communicate effectively with its users.25 Humans use both verbal and nonverbal modes of communication, including speech, auditory expression, facial expression, head motion, eye motion, gesture and pose, along with nonverbal facial behavior, which transmits the highest impact of the message being expressed,6 all of which points to communication as an extremely complex cognitive act.

Our focus is within HRI and is aimed at helping users communicate more easily with a humanoid robotic system, such that they will be able to understand the current mental state of the robot (i.e., understanding, misunderstanding, uncertainty or thought) and therefore be able to "stay in touch" and hopefully complete the communication to their satisfaction. We performed a human–human experiment with a protocol designed to elicit these mental communicative states, so that we could ascertain the common nonverbal facial behaviors that occur within each of these states over time, with the aim of defining a model of nonverbal facial behavior within human–human communication restricted to these mental states. This report focuses on the blink aspect of our findings, derived from the analysis of blink co-occurrence with the other facial communicative behaviors in our data corpus. Our analysis both confirms existing knowledge of human blink behavior in communication and augments it with new data, allowing for the creation of a more complete computational model of the human blink.

A number of neurological, psychological and HRI-based studies have focused upon human blink behavior, the morphology of a blink and, further, the creation of a human blink model. On the neurological side, a recent study of human blink behavior by Brefczynski-Lewis et al.7 analyzed event-related potentials (ERPs) elicited by observing non-task-relevant blinks, eye closure and eye gaze changes in a centrally presented natural face stimulus, and drew the conclusion that "small and task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding is suggestive of the potential social significance of blinks...". Researchers within psychology have repeatedly reported that human blink rates are raised when in communication with other humans, compared to when reading, watching a video or at rest.1,8–15 They have also noted that blinks are correlated with other communicative behaviors; for example, Condon and Ogston16 note: "The eye blink has been found to occur during vocalization at the beginning of words or utterances, usually with the initial vowel of the word; at word medial syllabic change points; and precisely following the termination of a word. Thus, speed variations and eye blinks do not seem to occur randomly, but are also related to their on-going variations in the sense that if they occur, their point of occurrence may be relatively specifiable."
In a study parallel to our own, Cummins11 recently reported on co-occurrences between gaze, start of own speech and blinks, suggesting that blinks and gaze are inseparably linked in a way that defines individual communicative style. Our data concur well with these findings.

The study and creation of eye movement/blink generation systems has steadily moved forward, specifically within the area of avatar animation. Lee et al.17 developed a seminal model of animated eye gaze for use with avatars that utilized the "alterEGO" facial animation system18 to produce animated blinks amongst other facial behaviors. They suggested, from their face-tracking data, that eye blinks have a link to eye movement. Deng et al.19 followed up on the concept of eye blinks triggered through eye movement behavior; their work stands out for its use of texture synthesis techniques (utilizing motion capture of human communicative behavior) to generate eye motion and blinking in avatars. Weissenfeld et al.20 created a detailed probabilistic model of eye movement generation utilizing actual image samples of human eye behavior and reflecting the experimental link between blink behavior and eye movements (specifically looking at/away from the interlocutor). Their system synthesized specific image sets of eye behavior (including blinks) to be expressed in real time, dependent on the eye movement requirements of the speech to be performed by the avatar.

Blink morphology was not covered in these pieces of research, and it has a major impact on the believability of a character's performance. Blink morphology has, however, recently been studied in detail by Trutoiu et al. at Disney Research.21 They found that "Conventional methods for eye blink animation generally employ temporally and spatially symmetric sequences; however, naturally occurring blinks in humans show a pronounced asymmetry on both dimensions... animated blinks generated from the human data model with fully closing eyelids are consistently perceived as more natural than those created using the various types of blink dynamics proposed in animation textbooks."

The data available so far do not support the development of a generative model able to express the mental communicative states of understanding, misunderstanding and uncertainty. Our results add to knowledge of human blink behavior by identifying the new blink behavior triggers of facial expression onset/offset, interlocutor speech onset and mental communicative state change, including the newly investigated mental communicative states of understanding, misunderstanding and uncertainty. We also include gender-based differences for blink triggers in all defined communicative facial behaviors. Within blink morphology, we add the half blink type (where the upper eyelid only half covers the eye, but still covers the pupil, therefore inhibiting vision as would a standard blink) and gender differences in overall blink duration.
2. Human–Human Conversational Analysis Experiment

2.1. Experimental method

2.1.1. Experimental procedure

The experimental protocol was designed to elicit the mental communicative states of "understanding", "misunderstanding", "uncertainty" and "thought" from participants, allowing the transcription and analysis of the facial nonverbal behavioral traits that each of these "mental communicative states" triggers during the communicative process. The states were defined based upon current natural language processing (NLP) system states arising while processing human speech through programmed dialogues to produce a speech response for the user.

NLP state definitions:
• The "misunderstanding state" occurs when the NLP system cannot make sense of the received speech.
• The "uncertainty state" occurs when more than one response possibility arises from the received speech.
• The "understanding state" occurs when only one response is processed from the received speech.
• The "thought state" occurs during any pause created by the NLP system whilst processing received speech. This state is further subdivided into "listening" and "processing" variants.

These states run concurrently throughout any communication and can be judged to be extremely important for HRI when considering the feedback required for a user to "stay in touch" and complete a communication with a robotic system. A minimal sketch of this state assignment is given after the dialogue example below.

To elicit these "states" and their associated facial behaviors over time, an interlocutor (experimenter) engaged with participants in a one-to-one communication based around a pre-defined dialogue script. The script incorporated specific sentence/word delays, noise (e.g., fake words) and errors (e.g., incorrect/misplaced words), allowing these four different "mental communicative states" to arise at differing times throughout the communication. An example section of the dialogue script:

Interlocutor: Do you know which star sign you are?
Participant: SPEECH RESPONSE (Thought to Understanding or Uncertainty)
Interlocutor: I was born in mid-November. Do you know which star sign I am?
Participant: SPEECH RESPONSE (Thought to Understanding or Uncertainty)
Interlocutor: I'm not sure I believe in the information gleaned from star signs, do you?
Participant: SPEECH RESPONSE (Thought to Understanding or Uncertainty)
Interlocutor: Hmm. That's interesting. So, why do you believe this?
Participant: SPEECH RESPONSE (Thought to Understanding)
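As referenced above, the state assignment can be summarized in a short sketch. The following Python fragment is illustrative only: it assumes a hypothetical parser interface that returns a list of candidate responses (or None while still processing), which is our assumption rather than a detail of any specific NLP system.

```python
from enum import Enum

class MentalState(Enum):
    THOUGHT = "T"           # default while listening/processing
    UNDERSTANDING = "U"     # exactly one response candidate
    UNCERTAINTY = "UU"      # more than one response candidate
    MISUNDERSTANDING = "M"  # no sense could be made of the input

def classify_state(candidates):
    """Map an NLP parse result to a mental communicative state.

    `candidates` is the list of response possibilities produced for the
    received speech, or None while the system is still processing
    (a hypothetical interface, used here only for illustration).
    """
    if candidates is None:
        return MentalState.THOUGHT
    if not candidates:
        return MentalState.MISUNDERSTANDING
    if len(candidates) == 1:
        return MentalState.UNDERSTANDING
    return MentalState.UNCERTAINTY
```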
Fig. 1. Experimental design.
Two laptop computers and four cameras were used to display and capture all nonverbal facial behavior during communication (Figs. 1 and 2). Each of the two webcams was attached to one of the laptop computers to display an image of the interlocutor's and participant's faces to each other on the laptop screen placed in front of them. Two camcorders were used to record the voice and facial movements of both the participant and the interlocutor for post-experiment analysis. A separating panel prevented direct visual contact between interlocutor and participant.
Fig. 2. Experimental setup.
As previously expressed, the experimental setup was designed for the creation of communicative models for use in HRI systems providing primarily facial feedback, such as the robotic "LightHead" system.22 The setup therefore implemented this constrained visual communication channel through the use of face projection on the laptop screens. This forced communication between the interlocutor and participant to be based purely on speech and nonverbal facial behavior, as opposed to complete nonverbal body language including such elements as hand gestures and body pose.

The cameras recording the participant's and interlocutor's facial behavior were placed above each laptop. Despite the cameras not being in the center of the screen, it was possible to determine correctly whether the participant/interlocutor was looking either at or away from the interlocutor's face. This was acceptable for the requirements of the experiment, as the exact gaze direction of the participant/interlocutor was not required. Despite the misalignment between the camera and the participant/interlocutor face, communication between the participant and interlocutor appeared not to be affected, as all participants except one looked at the screen during the communication and not directly at the video camera.

The video recordings of the participant/interlocutor performances were made at 24 frames per second (fps), PAL standard, at a resolution of 720 × 576 pixels (576p). The frame rate and frame resolution were high enough to gain the level of detail required to analyze and map the defined nonverbal facial behaviors for model creation. Specifically, when looking at human blink morphology, we see a mean minimum blink attack time (i.e., the mean of the lowest attack time from all participants) of 1/9 s (24/9 ≈ 2.67 frames), which is well within ±1 frame accuracy. This information could be translated from the blink model to a face animation system, such as the "LightHead" system,22 which runs at 60 fps (60/9 ≈ 6.67 frames), thus giving a fluid blink animation at a resolution that comfortably imitates human blink behavior.

The interlocutor knew the requirements of the experiment, and this knowledge may have led to a possible dominance effect on the participant (i.e., the participants may have felt some level of subordination to the interlocutor during the communication), thus affecting the participant's nonverbal behavior throughout the communication. However, this effect, if present, would actually create suitable data for the design of appropriate robot communicative behavior in human–robot relations, where at least one study has shown that human preference is for social robotic systems to express subservience to their human users.23

Facial video data from the conversations were captured to AVI video files (including the dialogue audio stream) and the participants' facial behaviors were then transcribed for detailed analysis. The mental communicative states of thought, understanding, misunderstanding and uncertainty (see Fig. 3, clockwise from top left) were derived from the experiment team's interpretations of the participants' speech (context) and nonverbal facial behavior.

Both written and verbal instructions were given to each participant prior to the beginning of the experiment, and they then decided whether they still wished to participate in the study.
Fig. 3. Snapshot of behavior from each "mental communicative state" (clockwise from top left: thought, understanding, misunderstanding, uncertainty), derived from participants' speech (context) and nonverbal facial behavior.
No instructions were given on communication behavior, such that participants would act naturally during the communication interactions. A total of thirteen participants took part in the experiment. All participants were native English-speaking students from the University of Plymouth, with a gender split of seven males and six females (cf. Table 1), within an age range of 27–45 years. Each participant was identified for the purposes of the experiment by a pre-generated participant number and their self-reported age and gender details. Data from one of our thirteen participants (no. 4, male) were found to be unusable, as he had looked at the camera throughout the experiment, and not at the interlocutor's face projected on the laptop as instructed (Figs. 3 and 4); as such, these data could not be taken as evidence due to the effect on natural behavior.

2.1.2. Communicative facial behaviors

Human social interaction is extremely complex, and this shows explicitly in facial behavior during human–human communication, with subtle movement and interaction between mental, verbal and multiple nonverbal facial behaviors. The communicative facial behaviors recorded from the participants' communication during this study are listed below:
• Interlocutor Utterances (labeled as sSpeech^a): conversational utterances from the interlocutor (sSpeech) throughout the duration of the dialogue.

^a The sSpeech label defines interlocutor speech, previously defined as speaker speech.
Fig. 4. "Facial action encoding mark-up language" ("FaceML") XML schema. Its fields annotate participant/interlocutor utterances, participant eye movements, eye gaze, head movements, blink, facial expression, cognitive state and stare.
• Participant Utterances (labeled as pSpeech): conversational utterances from the participant (pSpeech) throughout the duration of the dialogue.
• Eye Movement (labeled as PEM): all visible participant eye movements, specifically those for gathering information (i.e., looking at the face of the interlocutor) and those used in the process of thought (i.e., looking up-left or up-right whilst processing an utterance).
• Eye Gaze (labeled as PEG): direction of gaze, either at (ATF) or away (AWF) from the interlocutor's face.
• Head Movement (labeled as PHG): visible participant head movement (i.e., gaze following and head nodding/shaking).
• Eye Blink (labeled as PBL): participants' eye blink actions, broken down into duration, closure type (half or full) and movement timings (attack (closing motion), sustain and decay (opening motion)).
• Facial Expression (labeled as PFE): communicative expressions made by the face/head. We have so far found seven main facial expressions within our data analysis process: the smile (happiness/understanding/agreement), pursed lips (thought, uncertainty), squint (thought, uncertainty), furrowed brow (uncertainty/misunderstanding), raised brows (uncertainty, thought), head nodding and head shaking (understanding).
• Mental Communicative State (labeled as PCO): mental communicative state (thought (T), understanding (U), uncertainty (UU)
and misunderstanding (M)) changes, derived from all participant verbal and nonverbal facial behavior throughout the communication.
• Stare (labeled as PST): the participant's saccadic eye movement is inhibited, showing no apparent eye movement/scene processing, as though the participant is looking through the interlocutor.

2.1.3. Data transcription

An XML script (Fig. 4) was created for transcription to enable encapsulation of these facial behaviors over time, building a detailed corpus of the participant's facial behavior during the conversational dialogue. Prior XML scripts have been created, such as "HumanML" (human mark-up language) and MURML (multimodal utterance representation mark-up language),24,25 but these were unsuitable for encapsulating all of the elements of the behavioral characteristics that we wished to annotate; we therefore created our own XML script and schema, entitled "FaceML" (facial action encoding mark-up language), for this purpose.

The analysis process initially transcribes the data from the AVI videos of participant/interlocutor interactions along the facial communicative behaviors (see Sec. 2.1.2) using the "FaceML" mark-up language; the transcription is then converted to CSV file format through a C#-based XML text parser, producing a temporal timeline output (Fig. 5; single participant, partial dialogue) within MS Excel.
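To make the conversion step concrete, the following Python sketch mirrors the XML-to-CSV flattening described above. The actual parser was written in C#, and the FaceML element and attribute names used here ("behaviour", "code", "onset", "offset") are illustrative assumptions rather than the published schema.

```python
import csv
import xml.etree.ElementTree as ET

def faceml_to_csv(xml_path, csv_path):
    """Flatten a FaceML transcription into a frame-indexed CSV timeline.

    The element/attribute names below are assumptions for illustration;
    the behaviour codes themselves are those of Sec. 2.1.2 (sSpeech,
    pSpeech, PEM, PEG, PHG, PBL, PFE, PCO, PST), with frame numbers
    counted at 24 fps.
    """
    root = ET.parse(xml_path).getroot()
    rows = [
        {
            "code": node.get("code"),        # e.g. "PBL" or "PEG-AWF"
            "onset": int(node.get("onset")),
            "offset": int(node.get("offset")),
        }
        for node in root.iter("behaviour")   # hypothetical tag name
    ]
    rows.sort(key=lambda row: row["onset"])  # temporal timeline order
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["code", "onset", "offset"])
        writer.writeheader()
        writer.writerows(rows)
```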
Fig. 5. Behavior timeline from Participant 6 XML transcription (24 frames per second (fps)). See Sec. 2.1.2 for communicative facial behavior descriptions.
The complexity of human communicative behavior makes transcription of facial expression and mental communicative state behaviors difficult, in the sense that these behaviors are tacit within human communication and as such we analyze them in a tacit manner. Specifically, facial expression onset and offset are easy to define; however, the difference between expression types (a squint and furrowed brows, for example) is sometimes difficult to ascertain. Mental communicative state change onsets pose an even greater problem to define, as these are annotated based upon the temporal performance of all other facial behaviors and speech context. An example of the behaviors within a "thought" mental state follows:

• Blink (long) at "thought" mental state onset.
• Eyes look away from interlocutor (breaking shared attention).
• Head rotates 10°.
• Head angled 5°.
• Smile starts at end point.
• Duration 15 frames.
The XML data were then analyzed with specific respect to co-occurrence between all recorded nonverbal facial behaviors, within the stated 24 frames per second (fps) resolution.

2.2. Results

2.2.1. Blinks in human conversational behavior

Our analysis has shown a significant blink co-occurrence with other conversational behaviors (i.e., the blink instigation overlaps either the onset or offset of a communicative facial behavior occurrence). A few examples of these blink co-occurrences can be seen in Figs. 5 and 7 (followed by the results fully displayed in Sec. 2.2.2). Figure 5 (above) shows that all participant blinks (PBL-ACTUAL), bar one, occur at the same time as either the start of participant speech (pSpeech onset), the start of a thought process (PCO-T), the start of looking away from the interlocutor's face (PEG-AWF), the end of participant speech (pSpeech offset) and/or the end of interlocutor speech (sSpeech offset). Figure 6 shows the participant looking at and away from the interlocutor's face during mental communicative state changes between "thought" (PCO-T) and "understanding" (PCO-U). Blinks in this instance correlate well with these mental state changes and looking at/away (head and eye movement) behaviors.
Fig. 6. Behavioral timeline showing a subset of looking at/away and mental communicative state change behaviors (Participant 6).
Fig. 7. Blinks relating to utterance behavior timings (Participant 6).
Note that all behavioral timeline graphs show timelines based upon camera frame counts, as this was the video sample interval (set at 24 fps).

Figure 7 shows a participant's blink actions based upon their own (pSpeech) and their interlocutor's (sSpeech) utterances. Strong blink co-occurrence with both utterance onset/offset behaviors is displayed. The third blink (frame 86) is not related to an utterance behavior.

2.2.2. Blink co-occurrence results

2.2.2.1. Blink co-occurrence rates per participant

Table 1 indicates the overall participant co-occurrence rate of blinks with all communicative facial behaviors (see Sec. 2.1.2) of interlocutor speech (onset/offset), participant speech (onset/offset), looking at/away, facial expression (onset/offset) and mental communicative state change.

Table 1. Blink/communicative facial behavior co-occurrence.

| Participant no./gender | Total dialogue duration (frames @ 24 fps) | Total no. (m) of blinks performed | Total no. (n) of blinks co-occurring with a behavior (within ±375 ms) | Blink co-occurrence % (n/m) | Random blink co-occurrence chance % (within ±375 ms) |
|---|---|---|---|---|---|
| 01/M | 9233 | 76 | 64 | 84% | 11% |
| 02/M | 7355 | 49 | 37 | 76% | 13% |
| 03/M | 6866 | 134 | 100 | 75% | 21% |
| 05/M | 5433 | 124 | 78 | 63% | 18% |
| 06/M | 5816 | 135 | 92 | 68% | 21% |
| 09/M | 7450 | 63 | 48 | 76% | 9% |
| 07/F | 7710 | 272 | 201 | 74% | 49% |
| 08/F | 10075 | 350 | 286 | 82% | 47% |
| 11/F | 8392 | 232 | 138 | 60% | 30% |
| 12/F | 8630 | 166 | 102 | 61% | 21% |
| 13/F | 10818 | 237 | 164 | 69% | 25% |
| 14/F | 9495 | 169 | 116 | 69% | 16% |
| TOTAL | 97273 | 2007 | 1430 | 71% | 23% |
Table 2. Total number of observed communicative facial behaviors.

| Participant no./gender | Participant speech | Interlocutor speech | Looking at/away | Facial expr. | Mental state change |
|---|---|---|---|---|---|
| 01/M | 238 | 148 | 43 | 164 | 105 |
| 02/M | 156 | 136 | 11 | 36 | 67 |
| 03/M | 108 | 98 | 37 | 110 | 64 |
| 05/M | 62 | 80 | 5 | 40 | 65 |
| 06/M | 70 | 84 | 58 | 46 | 77 |
| 09/M | 100 | 150 | 22 | 90 | 82 |
| 07/F | 174 | 216 | 33 | 128 | 74 |
| 08/F | 222 | 234 | 101 | 156 | 132 |
| 11/F | 112 | 154 | 29 | 128 | 112 |
| 12/F | 106 | 112 | 35 | 146 | 126 |
| 13/F | 128 | 160 | 68 | 206 | 165 |
| 14/F | 88 | 114 | 65 | 124 | 119 |
| TOTAL | 1564 | 1686 | 507 | 1374 | 1188 |
On average, 71% of participants' blinks co-occur with these behaviors, well above the random blink co-occurrence chance of 23%. (The random blink co-occurrence chance is the probability that a blink and a communicative facial behavior both fall within a ±375 ms (19-frame) window by chance.) It is worth noting that the average blink frequency over all 2007 participant blinks is 30 blinks/min, similar to, though slightly higher than, the value of 26 blinks/min during conversation (compared to 17 blinks/min during rest and 4.5 blinks/min during reading) reported by Bentivoglio et al.9

2.2.2.2. Blink co-occurrence rates with specific facial behaviors

In this section, we examine the communicative facial behaviors of speech onset/offset, looking at/away from the interlocutor, facial expression onset/offset and mental communicative state change (see Sec. 2.1.2) and their associated blinks, to find which behaviors had co-occurrence values beyond their chance values and thus commonly trigger a human blink action within communication. Note that the mental communicative state change behavior is a subjective entry, as this behavior was assigned visually by the research team annotating the video corpus.

Tables 3 and 4 give the fraction of communicative facial behaviors during which a blink is observed, and their average over all participants. (The co-occurrence average is the number of communicative facial behavior onsets/offsets, out of the participant's total, that co-occur with a blink within a ±375 ms (19-frame) capture window. The co-occurrence chance (%) is the average probability that one of the facial behaviors and one of the blinks both fall within a ±375 ms (19-frame) capture window by chance.)
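The co-occurrence computation itself reduces to a windowed matching test. The sketch below illustrates the calculation under the stated ±375 ms (±9 frames at 24 fps) window; the function and variable names are ours, not the original analysis tooling.

```python
FPS = 24
WINDOW = 9  # +/-375 ms at 24 fps, i.e. the 19-frame capture window

def blink_cooccurrence_rate(blink_onsets, behaviour_events):
    """Fraction of blinks whose onset lies within +/-375 ms of any
    behavior onset/offset; all times are frame numbers at 24 fps."""
    if not blink_onsets:
        return 0.0
    hits = sum(
        any(abs(blink - event) <= WINDOW for event in behaviour_events)
        for blink in blink_onsets
    )
    return hits / len(blink_onsets)
```

Applied to Participant 01's transcription, this calculation would be expected to reproduce the 64 out of 76 co-occurring blinks (84%) shown in the first row of Table 1.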
Table 3. Fraction of communicative facial behaviors during which a blink starts (within a ±375 ms window from the behavior's onset/offset); p(blink|behavior), Part I.

| Participant | Participant speech (on/off) | Chance % | Interlocutor speech (on/off) | Chance % | Looking (at/away) | Chance % |
|---|---|---|---|---|---|---|
| 01 | 26% (33%/19%) | 8% | 14% (8%/20%) | 5% | 51% (41%/62%) | 1% |
| 02 | 20% (27%/13%) | 5% | 8% (9%/7%) | 4% | 55% (33%/80%) | 1% |
| 03 | 38% (48%/28%) | 11% | 35% (18%/51%) | 10% | 81% (68%/94%) | 4% |
| 05 | 68% (87%/48%) | 9% | 18% (8%/28%) | 12% | 80% (67%/100%) | 1% |
| 06 | 59% (77%/40%) | 9% | 49% (38%/60%) | 11% | 50% (59%/41%) | 7% |
| 09 | 21% (28%/14%) | 4% | 17% (13%/20%) | 6% | 73% (64%/82%) | 1% |
| 07 | 56% (63%/48%) | 29% | 64% (63%/65%) | 36% | 70% (59%/81%) | 5% |
| 08 | 65% (78%/52%) | 26% | 66% (60%/72%) | 27% | 92% (88%/96%) | 12% |
| 11 | 35% (36%/34%) | 13% | 53% (55%/51%) | 18% | 48% (47%/50%) | 3% |
| 12 | 36% (53%/19%) | 9% | 26% (23%/29%) | 9% | 60% (61%/59%) | 3% |
| 13 | 52% (59%/45%) | 9% | 39% (41%/38%) | 12% | 53% (59%/47%) | 5% |
| 14 | 50% (64%/36%) | 6% | 44% (39%/49%) | 8% | 75% (85%/66%) | 4% |
| Co-occur average | 44% (54%/33%) | 12% | 36% (31%/41%) | 13% | 66% (61%/72%) | 4% |
Table 3 clearly shows a strong blink co-occurrence with looking at/away changes and with participant speech (specifically participant speech onsets), and Table 4 clearly shows a strong blink co-occurrence with mental communicative state change; all are conclusively above their chance levels. A significant proportion of blink co-occurrence rates in our experiment are well above their chance level (Tables 3 and 4), which suggests that a large fraction of blinks are generated as part of communicative behavior and are not just a baseline increase of the physiological blink frequency.1

Our additions to knowledge of human blink behavior are the blink triggers of facial expression onset/offset, interlocutor speech onset and mental communicative state change, and therein the inclusion of the additional mental communicative states of understanding, misunderstanding and uncertainty. We also include gender-based differences (Table 5) for blink triggers in all defined communicative facial behaviors (Sec. 2.1.2). The co-occurrence average shown in Tables 3 and 4 is averaged over all participants and as such gives androgynous values. Table 5 displays the gender differences, allowing for the creation of specific gender-based blink models.

Females seem to blink almost twice as much during communication as their male counterparts (20 blinks/min for males, 38 blinks/min for females); however, there are still differences in blink rate per participant, which change significantly between participants within each gender group (Table 1).

Table 4. Fraction of communicative facial behaviors during which a blink starts (within a ±375 ms window from the behavior's onset/offset); p(blink|behavior), Part II.

| Participant | Facial expression (on/off) | Chance % | Mental communicative state change | Chance % |
|---|---|---|---|---|
| 01 | 11% (9%/12%) | 5% | 33% | 3% |
| 02 | 17% (28%/6%) | 6% | 33% | 12% |
| 03 | 29% (33%/24%) | 12% | 66% | 7% |
| 05 | 33% (25%/50%) | 6% | 38% | 10% |
| 06 | 35% (39%/30%) | 6% | 53% | 10% |
| 09 | 13% (17%/9%) | 4% | 70% | 3% |
| 07 | 38% (33%/23%) | 21% | 70% | 12% |
| 08 | 43% (55%/31%) | 18% | 34% | 15% |
| 11 | 41% (38%/44%) | 15% | 54% | 13% |
| 12 | 26% (30%/22%) | 12% | 35% | 10% |
| 13 | 27% (27%/27%) | 15% | 58% | 12% |
| 14 | 38% (50%/26%) | 8% | 53% | 8% |
| Co-occur average | 29% (32%/25%) | 11% | 50% | 10% |
Table 5. Gender-based fraction of communicative facial behaviors during which a blink starts (within a ±375 ms window from the behavior's onset/offset); p(blink|behavior), Part III.

| | pSpeech (on/off) | sSpeech (on/off) | Looking (at/away) | Facial expression (on/off) | Mental communicative state change |
|---|---|---|---|---|---|
| Male: co-occur average | 39% | 24% | 65% | 23% | 49% |
| Male: co-occur chance % | 8% | 8% | 3% | 7% | 8% |
| Female: co-occur average | 49% | 49% | 66% | 36% | 51% |
| Female: co-occur chance % | 15% | 18% | 5% | 15% | 12% |
This high female blink rate leads to slightly higher average blink co-occurrence coverage across each of the communicative facial behaviors (Table 5), although with only very slight gains for the behaviors of looking at/away and mental communicative state change. The average blink co-occurrence values, even for the female gender with its increased blink rate, are still significantly above chance.
2.2.2.3. Blink co-occurrence through blink presence display

Blink co-occurrences can also be visualized using the "blink presence" histograms in Figs. 8–17. These represent the presence of blinks within a −72 to +72 frame (−3 to +3 s) time window around a communicative facial behavior onset or offset.
Fig. 8. Blink presence surrounding "Looking At" behavior events.
Fig. 9. Blink presence surrounding "Looking Away" behavior events.
Fig. 10. Blink presence surrounding "Participant Speech Onset" behavior events.
Blink presence defines where a blink exists at a specific time. For example, a blink lasting (and having a presence of) 10 frames would contribute to 10 bins in the histogram from its inception point. Given the average duration of a blink (8 frames), even perfect synchronization between blink and behavior would result in a peak of width 8 in the histogram, starting from the behavior onset/offset (at frame 0).
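A histogram of this kind can be accumulated directly from the transcribed frame data. The following sketch is our reconstruction (the original analysis tooling is not specified) of the presence counting just described:

```python
import numpy as np

def blink_presence_histogram(blinks, events, half_window=72):
    """Accumulate blink presence around behavior events (cf. Figs. 8-17).

    `blinks` is a list of (onset_frame, duration_frames); `events` is a
    list of behavior onset/offset frames; all counted at 24 fps.
    """
    hist = np.zeros(2 * half_window + 1, dtype=int)
    for event in events:
        for onset, duration in blinks:
            # A blink contributes one count to every bin it is present
            # in, from its inception point onward.
            for frame in range(onset, onset + duration):
                offset = frame - event
                if -half_window <= offset <= half_window:
                    hist[offset + half_window] += 1
    return hist  # bin 0 <-> -72 frames; bin half_window <-> event frame 0
```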
Fig. 11. Blink presence surrounding "Participant Speech Offset" behavior events.
Fig. 12. Blink presence surrounding "Interlocutor Speech Onset" behavior events.
On the other hand, blinks not synchronized with the behavior under consideration will contribute to an average "background level" of blink presence. Caution should be taken with the interlocutor speech (labeled sSpeech in the figures) offset behavior peak displayed in Fig. 13, as this, as we show through Figs. 17(a) and 17(b), is largely an effect of participant speech (pSpeech) onset behaviors occurring almost concurrently.
Fig. 13. Blink presence surrounding "Interlocutor Speech Offset" behavior events.
Fig. 14. Blink presence surrounding "Facial Expression Onset" behavior events.
The "looking at" behavior has a high co-occurrence (66%) above chance (4%) (Table 3). Figure 8 shows blinks occurring consistently between −6 to +2 frames around the instantiation of this behavior. This shows that, during dialogue-based communication, we generally blink close to the start of the head and eye movement toward the interlocutor; we therefore propose that this (along with the head and eye movement back to the interlocutor) could be used as a cue of attention.
Fig. 15. Blink presence surrounding "Facial Expression Offset" behavior events.
Fig. 16. Blink presence surrounding "Mental Communicative State Change" behavior events.
The "looking away" behavior also shows a high co-occurrence (69%) above chance (4%) (Table 3). Figure 9 shows blinks occurring consistently between −2 to +4 frames around the instantiation of this behavior. During dialogue-based communication we generally blink close to the start of the head and eye movement away from the interlocutor; we therefore propose that this (along with the head and eye movement away from the interlocutor) gives a cue to the interlocutor that we are entering a thought state and that they should await our response.

The "participant speech onset" behavior shows a high co-occurrence (52%) above chance (12%) (Table 3). Figure 10 shows blinks occurring consistently between −6 to +4 frames around the instantiation of this behavior. We propose that this could reinforce the signal from the participant that they are taking their turn to speak.

The "participant speech offset" behavior displays a co-occurrence effect (33%) above chance (12%) (Table 3). Figure 11 shows blinks occurring between −8 to +14 frames around the instantiation of this behavior. This result shows that during dialogue-based communication we occasionally blink around the completion of a speech segment, which may give a cue to the interlocutor that it is their turn within the dialogue flow.

The "interlocutor speech onset" behavior shows infrequent blink behavior when the interlocutor starts speaking, despite a co-occurrence effect (35%) above chance (13%) (Table 3). Figure 12, however, does show a higher blink count occurring consistently both between −46 to −20 frames and −8 to +4 frames around the instantiation of this behavior. Hence, we propose that blink behavior is occasionally used as acknowledgement of the interlocutor's start of speech and, as such, acceptance by the interlocutor of their turn within the dialogue flow.
Fig. 17. (a) "Participant speech onset" events surrounding "interlocutor speech offset" events. (b) Modeled "participant speech onset" blink behavior surrounding "interlocutor speech offset" events.
The "interlocutor speech offset" behavior shows strong co-occurrence (43%) above chance (13%) (Table 3). Figure 13 shows blinks occurring consistently between −2 to +8 frames. However, there is evidence that these blinks are actually associated with the start of speech of the participant taking their turn. The following analysis provides this evidence.
The "facial expression onset" behavior displays a co-occurrence effect (33%) above chance (11%) (Table 4). Figure 14 shows blinks occurring consistently between −2 to +20 frames around the instantiation of this behavior. This shows that during dialogue-based communication we occasionally blink prior to facial expression instigation; we therefore propose that this could be used as a cue of semantic accentuation.

The "facial expression offset" behavior shows infrequent blink behavior upon completion of a facial expression, despite a co-occurrence effect (29%) above chance (11%) (Table 4). Figure 15, however, does show a higher blink count occurring consistently between −30 to +2 frames around the instantiation of this behavior. Hence, we propose that blink behavior is occasionally used to accentuate facial expression completion.

The "mental communicative state change" behavior has a high co-occurrence (50%) above chance (10%) (Table 4). Figure 16 shows blinks occurring consistently between −2 to +12 frames around the instantiation of this behavior. This shows that during dialogue-based communication we generally blink close to a change in mental communicative state (i.e., between thought, understanding, uncertainty and misunderstanding); we therefore propose that this (along with concurrent behavior instigations) could be used as a cue of attention for the current state, thus affecting an interlocutor's grounding behavior (such as turn-taking).

A histogram of the "participant speech onset" times surrounding "interlocutor speech offset" behaviors [Fig. 17(a)] displays a trend similar to Figs. 13 and 17(b) of blink presence surrounding "interlocutor speech offset" behaviors. Based upon Fig. 10, which shows participant-speech-onset co-occurring blinks to closely surround the onset of participant speech, and taking the most common blink duration of 8 frames (333 ms) (Fig. 21), we created a blink behavior model. Applying this model to the participant speech onset data from Fig. 17(a) produced the synthetic blink presence histogram shown in Fig. 17(b). The moderately strong Pearson correlation (r = 0.862, p = 0.05) between this modeled "participant speech onset" blink behavior [Fig. 17(b)] and the captured "interlocutor speech offset" blink behavior (Fig. 13) leads us to propose that the "interlocutor speech offset" results (Table 3) are actually produced mainly through participant speech onset blink behavior and, as such, that interlocutor speech offsets are not commonly used as a trigger for blink generation during dialogue-based communication, contrary to what their co-occurrence value suggests.

Based upon these results, a blink generation model (Fig. 18) has been created that utilizes eight of the nine behaviors analyzed (i.e., participant speech onset/offset, interlocutor speech onset, looking at/away from the interlocutor's face, facial expression onset/offset and mental communicative state change) as blink triggers, along with their respective mean co-occurrence weightings (Tables 3 and 4, and Fig. 19).
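The construction of Fig. 17(b) can be approximated as follows: each participant speech onset is modeled as triggering a fixed 8-frame blink, and the resulting synthetic presence histogram is correlated with the captured one. This sketch simplifies our model (it omits the blink position distribution of Fig. 10) and uses an assumed SciPy-based correlation step:

```python
import numpy as np
from scipy.stats import pearsonr

BLINK_FRAMES = 8  # modal blink duration: 8 frames (333 ms at 24 fps)

def synthetic_blink_presence(speech_onset_hist):
    """Turn a histogram of pSpeech onset counts (per frame offset) into
    a blink presence histogram by letting every onset carry a fixed
    8-frame blink (a simplification of the full model)."""
    kernel = np.ones(BLINK_FRAMES)
    # 'same' keeps the -72..+72 frame axis aligned with the input.
    return np.convolve(speech_onset_hist, kernel, mode="same")

def correlate_with_captured(captured_hist, speech_onset_hist):
    modelled = synthetic_blink_presence(speech_onset_hist)
    r, p = pearsonr(modelled, captured_hist)  # the text reports r = 0.862
    return r, p
```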
Fig. 18. "Blink Model" flow diagram.
Fig. 19. Sample of Participant 3 behavior data (lines 2–9) and output of the computational blink model (line 1, labeled PBL MODEL). A constant blink duration of 8 frames (333 ms) was used.
2.2.3. Modeling the blink

2.2.3.1. Toward a humanlike blink generation model

Our proposed blink generation model (Fig. 18) is part of an overall facial nonverbal expression system.
Fig. 20. Blink morphology module.
The system takes as input the detection of the interlocutor's speech onset/offset and the internal "mental" states generated by an NLP system (Sec. 2.1.1). The mental states define a sequence of facial communicative behaviors, which are in turn used to trigger blinks based on their probabilities (Tables 3 and 4). If a blink occurrence is triggered, all other communicative behaviors are ignored until the next update function call. The position (or delay) of the blink action is defined through the blink position probability curve of the behavior in question (Figs. 8–12 and 15–17), and the blink's physical morphology is determined through the blink morphology module (Fig. 20), which is discussed in Sec. 2.2.3.3. A physiological blink mechanism (for cleaning/humidifying the eye) is included in the model, commonly performing a blink action within a timeframe of 1.96–10.2 s (mean 4.78 s) where no prior blink has been instantiated.10

Therefore, the blink model triggers a blink action either on receiving and accepting a user speech onset, a robot/avatar own speech onset/offset, a looking at/away event, a facial expression onset/offset or a mental communicative state change behavior [based on its specific probability weighting value (Fig. 18)], or when a physiological blink is instantiated.
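A minimal sketch of this decision logic follows, assuming per-behavior trigger probabilities taken from the onset/offset co-occurrence averages of Tables 3 and 4 (the full Fig. 18 model is richer, including the blink position curves; the uniform draw for the physiological timer is also our assumption, as the source gives only the range and mean):

```python
import random

# Androgynous trigger probabilities from the co-occur averages in
# Tables 3 and 4 (onset/offset split); interlocutor speech offset is
# deliberately excluded (Sec. 2.2.2.3).
TRIGGER_PROB = {
    "pSpeech_onset": 0.54, "pSpeech_offset": 0.33,
    "sSpeech_onset": 0.31,
    "looking_at": 0.61, "looking_away": 0.72,
    "expression_onset": 0.32, "expression_offset": 0.25,
    "state_change": 0.50,
}

PHYSIOLOGICAL_RANGE_S = (1.96, 10.2)  # inter-blink interval; mean 4.78 s

class BlinkModel:
    """Sketch of the Fig. 18 decision loop (not the full published model)."""

    def __init__(self):
        self._arm_physiological_timer()

    def _arm_physiological_timer(self):
        # Uniform draw is an assumption; only the range/mean are reported.
        self.time_to_physiological = random.uniform(*PHYSIOLOGICAL_RANGE_S)

    def update(self, elapsed_s, behaviour_events):
        """Return True if a blink should be produced on this update.

        `behaviour_events` lists trigger names (keys of TRIGGER_PROB)
        detected since the previous call -- an assumed interface.
        """
        self.time_to_physiological -= elapsed_s
        for event in behaviour_events:
            if random.random() < TRIGGER_PROB.get(event, 0.0):
                self._arm_physiological_timer()  # a blink resets the timer
                return True  # remaining events ignored until next update
        if self.time_to_physiological <= 0:  # eye cleaning/humidifying blink
            self._arm_physiological_timer()
            return True
        return False
```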
2.2.3.2. Full model implementation scenario

An example scenario of the computational blink model (Fig. 18) functioning as part of a higher-level nonverbal facial behavior system can be conceived as follows. The output from the robotic system will be controlled by a simple grounding model, which enters a "listening state" when awaiting input from a user. This requires the NLP system to await verbal input from the user. During this time, the robotic system will use face tracking to face the user, following Raidt et al.,14 surveying the user's face in a repeating triangular motion from each eye to the mouth (where most of the attention time is given) until the user has finished communicating. Upon completion of user speech, the robotic system will express its internal mental communicative state, as identified from active processes within the NLP system (Sec. 2.1.1). For instance, it may look up and away from the user to display a "thought" mental state while trying to resolve an ambiguity. Upon completion of input processing, the robotic system may enter a "speaker state" to express its response: for instance, looking back at the user and instigating its speech and nonverbal facial behavior (including blinks produced by the blink model).

The blink model (including the physiological blink system) is conceived to be engaged at all times whilst the robot is in communication with the user. Physiological blink generation will always be active, no matter the status of the robotic system. However, once the robot is engaged in communication, the blink model will also trigger communicative blinks according to their probability of occurrence in our data (Tables 3 and 4). An example is shown in Fig. 19, where a subset of Participant 3's communicative behaviors is used as input triggers to the blink model. Figure 19 shows blinks generated by the model co-occurring with some of the subject's blinks for the following behaviors:
\PSpeech" behavior (frame 60) is matched by both a human and a model blink. \Looking Away" behavior (frame 283) and \Looking At" behavior (frame 295) are matched by both a human and a model blink. . \Looking Away" behavior (frame 412), \Looking At" behavior (frame 433) and \pSpeech" behavior are each matched by both human and model blinks. . The model however, does miss the \pSpeech" behavior (frame 353), which cooccurred with a very long (and rare) 1þ sec human blink. .
Some differences between human and model blink behavior are to be expected, with the model both missing blinks and adding blinks (where participants did not instantiate one within the captured data), due to the probabilistic nature of the model design.

2.2.3.3. Morphology of a blink

The morphology of the blink within our XML schema is defined through its type, either half or full (where a "half" type is one where the eyelid only half closes when the blink action is performed), and its duration (in parts of a second), which is further broken down into attack (the closing duration of the eyelid), sustain (the duration the eyelid is kept closed) and decay (the opening duration of the eyelid), as displayed in Table 6. Blink morphology occurrence is split between 91% "full" blinks and 9% "half" blinks within the 2007 blinks captured.
Table 6. Blink morphology statistics.

| Statistic | Value |
|---|---|
| Single blink occurrence | 85% |
| Multiple blink occurrence | 15% |
| Occurrence of double vs. triple blink | 80% vs. 20% (of the 15% above) |
| Occurrence of full vs. half blink type | 91% vs. 9% |
| Full blink duration (mean) | 432 ms |
| Full blink duration (standard deviation) | 72 ms |
| Full blink attack (mean) | 111 ms |
| Full blink attack (standard deviation) | 31 ms |
| Full blink sustain | Remainder of duration |
| Full blink decay (mean) | 300 ms |
| Full blink decay (standard deviation) | 123 ms |
| Half blink duration (mean) | 266 ms |
| Half blink duration (standard deviation) | 40 ms |
| Half blink attack (mean) | 97 ms |
| Half blink attack (standard deviation) | 28 ms |
| Half blink sustain | Remainder of duration |
| Half blink decay (mean) | 148 ms |
| Half blink decay (standard deviation) | 64 ms |
"Multiple blink" actions (where smaller-duration blinks happen almost concurrently with each other) make up approximately 15% of the total blinks performed. Approximately 20% of these multiple blinks take the form of a "triple blink" (three blinks in quick succession) as opposed to a "double blink" (two blinks in quick succession). A triple blink has the same duration, attack, sustain and decay morphology as a double blink. Figure 20 represents the blink morphology as a computational module for use within the complete computational blink model (Fig. 18), upon instantiation of a co-occurring blink.
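The morphology module of Fig. 20 can be sketched as a sampler over the Table 6 statistics. Gaussian sampling of the mean/SD timings is our assumption; the table itself specifies only means and standard deviations:

```python
import random

def sample_blink_morphology():
    """Draw one blink's shape from the Table 6 statistics.

    Gaussian sampling is an assumption; Table 6 reports only means and
    standard deviations for the timings.
    """
    full = random.random() < 0.91                 # 91% full vs. 9% half blinks
    if full:
        duration = random.gauss(432, 72)          # all timings in ms
        attack = random.gauss(111, 31)
        decay = random.gauss(300, 123)
    else:
        duration = random.gauss(266, 40)
        attack = random.gauss(97, 28)
        decay = random.gauss(148, 64)
    sustain = max(0.0, duration - attack - decay)  # "remainder of duration"
    repeats = 1
    if random.random() < 0.15:                     # 15% are multiple blinks...
        repeats = 3 if random.random() < 0.20 else 2  # ...20% of those triple
    return {
        "type": "full" if full else "half",
        "attack_ms": attack, "sustain_ms": sustain, "decay_ms": decay,
        "repeats": repeats,
    }
```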
Fig. 21. Histogram of all blink durations (12 subjects).
Fig. 22. Histogram of male blink durations (six subjects).
Of interest, Fig. 21 displays the duration values of all 2007 blinks captured in our corpus. The modal blink duration over all 12 participants is 8 frames (333 ms). Figures 22 and 23 display blink duration values for male and female participants, respectively. The gender-based modal blink durations are 8 frames (333 ms) for males and 10 frames (417 ms) for females.
Fig. 23. Histogram of female blink durations (six subjects).
2.3. Discussion

The presented analysis of human–human conversations has established clear links between eye movement, speech, mental state changes and other facial behaviors in social interaction. Our results show that blink actions are strongly linked to the onsets and offsets of communicative facial behaviors and are not just physiological (i.e., eye-surface cleaning/humidifying) in nature (Tables 3 and 4, Figs. 8–16). This study has extended current knowledge through the identification of new blink triggers: facial expression onset/offset, interlocutor speech onset and mental communicative state change (including the newly investigated mental communicative states of understanding, misunderstanding and uncertainty). Therefore, 71% of blinks can now be associated with communicative behavior. This, we must state, is not evidence that blinks carry communicative information in their own right. We also produce data on gender-based differences for blink triggers in all defined communicative facial behaviors. Within blink morphology, we add the half blink type (where the upper eyelid only half covers the eye, but still covers the pupil, therefore inhibiting vision as would a standard blink) and gender differences in overall blink duration.

We used blink presence histograms (Figs. 8–16) to visualize the blink co-occurrence associated with facial communicative behavior. During the analysis process, it was found that interlocutor speech (sSpeech) offset was not a common blink trigger in human communicative behavior, as its high co-occurrence values were actually triggered significantly by the own-speech (pSpeech) onset behavior [Figs. 13, 17(a) and 17(b)]; hence blinks are not generally used as acknowledgement of turn-taking in our experiment, with only 3% of total blinks co-occurring with this behavior.

Further data captured in the analysis process also show that individual participants' facial behaviors were variable in number (Table 2) and morphology. Participants also varied in the number of blinks performed (Table 1) and in the length of time taken to conclude the dialogue (Table 1); however, initiating conversation and smooth turn-taking therein were never affected by these individual communicative behavioral traits.

It is important to state that the current blink model is "androgynous" in nature, taking its blink behavior from both male and female participants (Tables 3 and 4). It would be very simple, however, to implement gender-specific blink models (including blink morphology) from our current data corpus (Table 5) by altering the blink trigger probability weighting values and morphology timings, as sketched below. Allowing users to choose the gender of their robot would likely be important within the field of humanoid robotics, making the robot behave in a manner suited to the gender choice made. Further, if we restricted the model to data from a single participant, the robot would be able to imitate a specific person's nonverbal communicative behaviors. One could also mix and match participants' data, allowing for tuning of the expression level of the robot, leading to an optimization of its communicative behavior/humanlike qualities, especially if controlled by dialogue context in real time.
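For illustration, and as referenced above, such a gender-specific parameterization could be as small as swapping the trigger weighting tables; the values below come from Table 5 and Figs. 22 and 23, while the dictionary layout is ours:

```python
# Trigger weightings from the gender-based co-occur averages of Table 5;
# combining these with the gender-specific modal blink durations
# (Figs. 22 and 23) and the Table 6 morphology timings yields a "male"
# or "female" variant of the blink model.
MALE_TRIGGER_PROB = {
    "pSpeech": 0.39, "sSpeech": 0.24, "looking": 0.65,
    "expression": 0.23, "state_change": 0.49,
}
FEMALE_TRIGGER_PROB = {
    "pSpeech": 0.49, "sSpeech": 0.49, "looking": 0.66,
    "expression": 0.36, "state_change": 0.51,
}
MODAL_BLINK_DURATION_MS = {"male": 333, "female": 417}
```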
These are interesting areas of future work and would likely lead to improvement of the humanlike qualities of both our current and future models.
3. Conclusion

Early research posited that blinks were solely physiological in function. Successive researchers have, however, progressively identified behavioral correlates of blinking, such as the start of own speech, gaze shifts (toward and away from an interlocutor) and changes among a limited set of mental states. Thus, an increasing fraction of blinks has been given a communicative, rather than purely physiological, function. This paper has extended our knowledge of human blink behavior, showing that 71% of blinks co-occur with communicative facial behaviors, through the identification of the new blink triggers of facial expression onset/offset, interlocutor speech onset and mental communicative state changes (including the newly investigated mental communicative states of understanding, misunderstanding and uncertainty). At present, 29% of blinks are still not assigned to cognitive communicative processes and are, by default, termed physiological in our model. Future research will look at improving our understanding of human blink behavior in communication, thus enriching the blink model with further facial behaviors (not least freeform eye and head movements) to further lower the fraction of blinks attributed by default to the embedded physiological blink process. Also, user testing of the model on the "LightHead" robotic head system22 will allow us to evaluate the perceptual impact of the model's human-likeness.

This study has also contributed data on gender-based differences, revealing a higher blink rate and blink co-occurrence for female participants. Within blink morphology, we added the half blink and revealed gender differences in blink duration.

The creation of supplementary communicative facial behavior models beyond the blink model, such as a generative nonverbal facial expression model, will form the basis of the overall facial nonverbal behavior system outlined in this paper. Such an incrementally expanding system is expected to further improve HRI with each additional model incorporated.
References
1. M. Al-Abdulmunem and S. T. Briggs, Spontaneous blink rate of a normal population sample, International Contact Lens Clinic 26 (1999) 29–32.
2. A. R. Bentivoglio, S. Bressman, B. E. Cassetta, C. Donatella, P. Tonali and A. Albanese, Analysis of blink rate patterns in normal subjects, Movement Disorders 12 (1997) 1028–1034.
3. C. Breazeal, Designing Sociable Robots (The MIT Press, Cambridge, Massachusetts, 2002), p. 282.
4. J. A. Brefczynski-Lewis, M. Berrebi, M. McNeely, A. Prostko and A. Puce, In the blink of an eye: Neural responses elicited to viewing the eye blinks of another individual, Frontiers in Human Neuroscience 5 (2011) 1–8.
5. G. Bugmann and S. N. Copleston, What can a personal robot do for you?, in TAROS 2011 (Springer, Sheffield, UK, 2011), pp. 360–371.
6. T. K. Capin, E. Petajan and J. Ostermann, Very low bitrate coding of virtual human animation in MPEG-4, Encyclopedia of Telecommunications 17 (2000) 209–231.
7. L. G. Carney and R. M. Hill, The nature of normal blinking patterns, Acta Ophthalmologica 60 (1982) 427–433.
8. W. S. Condon and W. D. Ogston, A segmentation of behaviour, J. Psych. Res. 5 (1967) 221–235.
9. F. Cummins, Gaze and blinking in dyadic conversation: A study in coordinated behaviour among individuals, Language and Cognitive Processes (Psychology Press, 2011), pp. 1–25.
10. F. Delaunay, J. De Greeff and T. Belpaeme, Towards retro-projected robot faces: An alternative to mechatronic and android faces, in Robot and Human Interactive Communication (RO-MAN 2009), The 18th IEEE International Symposium on (Toyama, Japan, 2009), pp. 306–311.
11. Z. Deng, J. P. Lewis and U. Neumann, Automated eye motion using texture synthesis, IEEE Computer Graphics and Applications, March/April 2005, pp. 24–30.
12. M. J. Doughty, Consideration of three types of spontaneous eyeblink activity in normal humans: During reading and video display terminal use, in primary gaze, and while in conversation, Optometry & Vision Science 78 (2001) 712–725.
13. C. Evinger, K. A. Manning, J. J. Pellegrini, M. A. Basso, A. S. Powers and P. A. Sibony, Not looking while leaping: The linkage of blinking and saccadic gaze shifts, Experimental Brain Research 100 (1994) 337–344.
14. A. Hall, The origin and purposes of blinking, British Journal of Ophthalmology 29 (1945) 445–467.
15. M. Koneya and A. Barbour, Louder Than Words: Nonverbal Communication (Merrill Publishing Company, Columbus, Ohio, USA, 1976).
16. A. Kranstedt, S. Kopp and I. Wachsmuth, A multimodal utterance representation markup language for conversational agents, in Embodied Conversational Agents: Let's Specify and Compare Them!, Workshop Notes AAMAS, ed. A. Marriot (Bologna, Italy, 2002), pp. 1–6.
17. S. P. Lee, J. B. Badler and N. I. Badler, Eyes alive, ACM Trans. Graph. 21 (2002) 637–644.
18. L. Oestreicher, Cognitive, social, sociable or just socially acceptable robots?, in Robot and Human Interactive Communication (RO-MAN 2007), The 16th IEEE International Symposium on (Jeju Island, Korea, 2007), pp. 558–563.
19. L. Oestreicher, Providing for social acceptance in task modelling for robots, in Robot and Human Interactive Communication (RO-MAN 2007), The 16th IEEE International Symposium on (Jeju Island, Korea, 2007), pp. 81–86.
20. L. Oestreicher and K. S. Eklundh, User expectations on human-robot co-operation, in Robot and Human Interactive Communication (RO-MAN 2006), The 15th IEEE International Symposium on (University of Hertfordshire, Hatfield, UK, 2006), pp. 91–96.
21. J. Peltz and R. K. Thunga, HumanML: The Vision (USA, 2005).
22. E. Ponder and W. P. Kennedy, On the act of blinking, Experimental Physiology (1927) 89–110.
23. S. Raidt, G. Bailly and F. Elisei, Gaze patterns during face-to-face interaction, in Proceedings of the 2007 IEEE/WIC/ACM International Conferences on Web Intelligence
and Intelligent Agent Technology Workshops (IEEE Computer Society, Silicon Valley, California, USA, 2007), pp. 338–341.
24. L. C. Trutoiu, E. J. Carter, I. Matthews and J. K. Hodgins, Modelling and animating eye blinks, ACM Transactions on Applied Perception 2 (2011) 1–16.
25. A. Weissenfeld, K. Liu and J. Ostermann, Video-realistic image-based eye animation via statistically driven state machines, Vis. Comput. (2009) 1–16.
C. Ford received his B.Sc. and M.Sc. degrees from the University of Plymouth in 2004 and 2005, respectively. From 2005 to 2008, he was an applications programmer/software tester in industry and at the University of Plymouth, working with numerous blue-chip clients and on the 'Mind Cupola' project. He now holds a position as Ph.D. researcher at the Faculty of Technology of the University of Plymouth. Christopher Ford is the author of two prior proceedings papers. His research interests include human–robot interaction, human behavioural science and human communicative psychology. He is a member of the IEEE, and was a co-chair of the PCCAT 2012 conference in the United Kingdom.
G. Bugmann studied physics at the University of Geneva and received his Ph.D. in physics from the Swiss Federal Institute of Technology in Lausanne. He now holds a position as associate professor (Reader) in intelligent systems at the Centre for Robotics and Neural Systems of Plymouth University, where he develops human–robot dialogue systems and investigates the computational properties of biological vision and decision making. He contributes to the development of PlymBot, a record-winning humanoid competition robot. He teaches neural computation and natural language spoken interfaces, and manages the M.Sc. in Robotics Technology jointly delivered with Carnegie Mellon University in the USA. He previously worked at the Swiss Federal Institute of Technology in Lausanne, NEC's Fundamental Research Laboratories in Japan and King's College London. Guido Bugmann has three patents and more than 140 publications. He is a member of the Swiss Physical Society, the British Machine Vision Association, AISB, the board of EURON (2004–2008), the Academic Forum for Robotics (AFR-BARA) and the EPSRC peer review college.
P. Culverhouse received his B.A. (Biol.) from York University (England) and his Ph.D. from Plymouth (England). He now holds a position as associate professor in the Centre for Robotics and Intelligent Systems at the University of Plymouth. His interests include visual perception, automatic natural object identification and humanoid robotics. Phil Culverhouse has more than one hundred academic publications, including over forty on natural object recognition. He is co-chair of the international Working Group 130 on automatic plankton identification, sponsored by SCOR (Scientific Committee on Oceanic Research) and the Royal Society (UK).