IRENE: Context Aware Mood Sharing for Social Network

Munirul M. Haque, Md. Adibuzzaman, David Polyak, Niharika Jain, and Sheikh I. Ahamed
Dept. of Math., Statistics and Computer Science, Marquette University, Milwaukee, Wisconsin, USA
{mhaque, madibuz, dpolyak, njain, iq}@mscs.mu.edu

Lin Liu
School of Software, Tsinghua University, Beijing, China
[email protected]

Abstract—Social networking sites like Facebook, Twitter, and MySpace are becoming overwhelmingly powerful media in today's world. Facebook has 500 million active users, Twitter has 190 million visitors per month, and both are growing every second. At the same time, the number of smartphone users has crossed 45 million. We are focusing on building an application that connects these two revolutionary spheres of modern technology, a combination with huge potential in many sectors. IRENE, a facial expression based mood detection model, has been developed by capturing images while users work on webcam-supported laptops or mobile phones. Each image is analyzed to classify the user's mood into one of several categories. This mood information is shared in the user's Facebook profile according to the user's privacy settings. Several activities and events are also generated based on the identified mood.

Keywords—facial expression detection; social networking; mood; eigenface; eigenspace
I. INTRODUCTION
Facebook is a social networking website used throughout the world by users of all ages. The website allows users to friend each other and share information, resulting in a large network of friends. This lets users stay connected or reconnect by sharing information, photographs, statuses, wall posts, messages, and many other pieces of information. In recent years, as smartphones have become increasingly popular [3], the power of Facebook has become mobile. Users can now add and view photos, add wall posts, and change their status right from their iPhone or Android powered device. According to statistics [1], 150 million users access Facebook from their mobile devices. Several research efforts are based on the Facial Action Coding System (FACS), first introduced by Ekman and Friesen in 1978 [10]. It is a method for building a taxonomy of almost all possible facial expressions, initially launched with 44 Action Units (AU). The Computer Expression Recognition Toolbox (CERT) has been proposed [11, 12, 13, 14, 15, 16] to detect facial expressions by analyzing the appearance of Action Units related to different expressions. Different classifiers such as Support Vector Machines (SVM), AdaBoost, Gabor filters, and Hidden Markov Models (HMM) have been used alone or in
combination with others to gain higher accuracy. Researchers in [17, 18] have used active appearance models (AAM) to identify features for pain from facial expressions. An Eigenface based method was deployed in [19] in an attempt to find a computationally inexpensive solution; later the authors included Eigeneyes and Eigenlips to increase the classification accuracy [20]. A Bayesian extension of SVM named the Relevance Vector Machine (RVM) has been adopted in [21] to increase classification accuracy. Several papers [22, 23] relied on artificial neural networks with back propagation to derive classification decisions from extracted facial features. Many other researchers, including Brahnam et al. [24, 25] and Pantic et al. [7, 8], have worked in the area of automatic facial expression detection. Almost all of these approaches suffer from one or more of the following deficits: 1) reliance on clear frontal images, 2) out-of-plane head rotation, 3) selecting the right features, 4) failure to use temporal and dynamic information, 5) a considerable amount of manual interaction, 6) noise, illumination, glasses, facial hair, and skin color issues, 7) computational cost, 8) mobility, 9) intensity of pain level, and finally 10) reliability. Moreover, there has been no work on automatic mood detection from facial images in social networks. We have also analyzed the mood related applications of Facebook; a comparison table appears in the related work section. Our model is fundamentally different from all these simple models: they require users to manually choose a symbol that represents their mood, whereas our model detects the mood without user intervention. Our proposed system can work with laptops or handheld devices. Unlike other mood sharing applications currently on Facebook, IRENE does not need manual user interaction to set the mood. Mobile devices like the iPhone have a front camera, which is perfect for IRENE. The automatic detection of mood from facial features is the first of its kind among Facebook applications; this gains robustness but also brings several challenges. Contexts like location and mood are considered when sharing, to increase users' privacy. Users may or may not like to share their mood when they are in a specific location, and they may think differently about publishing their mood when they are in a specific mood. We have already developed a small prototype of the model and show some screenshots of our deployment. IRENE has the following novelties:
1. It is the first real time mood detection and sharing application in a social network. 2. All current mood applications require users to manually choose a symbol that represents their mood, but our model detects the mood without user intervention. 3. It is computationally inexpensive. 4. IRENE can be used from a cell phone, providing mobility. Several related works on modeling mood are described in Section II, which also covers current mood based applications on Facebook. Section III explains the importance of the project with scenarios. Our approach is discussed step by step in Section IV. Characteristics and novelties of the project are discussed in Section V. Section VI deals with evaluation strategies and presents some survey results. Some open questions and proposed future directions are given in Section VII, followed by the conclusion.
II. RELATED WORK
The related work for mood detection in social networking websites can be analyzed from two different angles: mood detection itself, and mood sharing in a social networking website. For facial expression classification there have been some recent developments. Pantic and Rothkrantz surveyed the existing facial expression detection algorithms in [2]. The Computer Expression Recognition Toolbox (CERT), developed at the University of California, San Diego [11-16], is an automated system that analyzes facial expressions in real time. The system automatically detects frontal faces in the video stream and codes each frame with respect to 40 continuous dimensions, including the basic expressions of anger, disgust, fear, joy, sadness, and surprise. All of these models are computationally expensive; another drawback is that the intensity level of expressions is not clearly addressed in their work. Another major piece of work has been done by Ashraf et al. [17, 18]. Here, an Active Appearance Model (AAM) is used to decouple shape and appearance parameters from the digitized face images, and support vector machines (SVM) are used with several representations from the AAM. Using a leave-one-out procedure, they achieved an equal error rate of 19% (hit rate = 81%) for automatic pain detection using canonical appearance and shape features. Temporal information, which might increase the accuracy rate, has not been used in recognizing facial expressions. The authors of [19] proposed an automatic expression detection system using Eigenimages. Skin color modeling is used to detect the face in a video sequence, and a mask image technique extracts the appropriate portion of the face for detecting pain. Each resulting masked image is projected into a feature space to form an Eigenspace based on training samples. When detecting a new face, the facial image is projected into the Eigenspace and the Euclidean distance between the new face and all the faces in the Eigenspace is measured; the face with the closest distance is assumed to be the match for the new image. Later the researchers extended their model [20] to create two more feature spaces, Eigeneyes and Eigenlips. The combination of all three together provided the best accuracy (92.08%). Here skin pixels are identified using the chromatic color space.
However, skin color varies widely with race and ethnicity, and a detailed description of the subjects' skin color is missing. Rimé et al. [9] showed that the emotional experiences occurring in everyday life create a strong urge for social sharing in the human mind, and that in most cases such emotional experiences are shared shortly after they occur. This clearly indicates that mood sharing through social networks is a potentially very attractive area. At present there are many applications on Facebook which claim to detect the user's mood. Here we have listed the top 10 such applications based on the number of active users.
TABLE I. CURRENT MOOD RELATED APPLICATIONS IN FACEBOOK

Name of Application       | No. of users | Mood Categories
My Mood                   | 3,614,637    | 56
SpongeBob Mood            | 126,557      | 42
The Mood Weather Report   | 611,132      | 174
Name and Mood Analyzer    | 29,803       | 13
Mood Stones               | 14,092       | -
My Friend's Mood          | 15,965       | -
My Mood Today!            | 11,224       | 9
How's your mood today?    | 6,694        | 39
Your Mood of the day      | 4,349        | -
Patrick's Mood            | 3,329        | 15
As can be seen from the data listed here, each of these applications uses a different number of mood categories. The common factor for all of them is that the user chooses the mood category that he/she thinks is the most suitable one; once the choice is made, the application offers to share the mood with the user's friends on Facebook, and the mood is then displayed on the user's wall. It is encouraging to see that a large number of users are interested in mood related applications even when they must select a mood category manually. A mood detection application that works on image processing and gives context aware results, instead of requiring the user to select a category manually, could therefore prove revolutionary.
III. MOTIVATION
At present there is no real time mood detection and sharing application on Facebook. All applications of this sort ask the user to select a symbol that represents his/her mood, and that symbol is published in the profile. We accepted the research challenge of a real time mood detection system based on facial features, and we use Facebook as a real life application of the system. We chose Facebook for its immense power of connectivity: anything can be spread to everyone, including friends and relatives, in a matter of seconds. We plan to use this strength of Facebook to improve quality of life.

A. Scenario 1
Mr. Johnson is an elderly citizen living in a remote place all by himself. Yesterday he was very angry with a banker who failed to process his pension scheme in a timely manner, but since he chose not to share his angry mood with others while using IRENE, that information was not leaked. Today, however, he woke up pensive and feeling depressed. His grandchildren, who are in his Facebook friend list, noticed the depressed mood. They called him and, using the event manager, sent him an online invitation to his favorite restaurant. Within hours they could see his mood status change to happy.

B. Scenario 2
Dan, a college student, has just received his semester final result online. He felt upset since he did not receive the result he wanted. IRENE took his picture and classified it as sad, and the result was uploaded to Dan's Facebook account. Based on Dan's activity and interest profile, the system suggested a recent movie starring Tom Hanks, Dan's favorite actor, along with the name of the local theater and the show times. Dan's friends saw the suggestion and they all decided to go to the movie. By the time they returned, everyone was smiling, including Dan.

C. Scenario 3
Mr. Jones works as a system engineer in a company. He is a bit excited today because the performance bonus will be announced. It comes as a shock when he receives a very poor bonus. His sad mood is detected by IRENE, but since Mr. Jones is in the office, IRENE does not publish his mood, thanks to its location aware context mechanism.
IV. OUR APPROACH
Development of the architecture can be divided into three broad phases. In the first phase the facial portion has to be extracted from an image. In the second phase the extracted image is analyzed for facial features, and classification follows to identify one of the several mood categories. Finally, we need to integrate the mobile application with Facebook. Figure 1 depicts the overview of the architecture.

A. Face Detection
Pixels corresponding to skin differ from the other pixels in an image. Skin color modeling in the chromatic color space [5] has shown that skin pixels cluster in a specific region. Although skin color varies widely across ethnicities, research [4] shows that skin pixels still form a cluster in the chromatic color space. After taking the image of the subject we first crop the image and keep only the head portion. We then use skin color modeling to extract the required facial portion from the head image.
Figure 1. Architecture of Facebook integration
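As an illustration of the skin color modeling step in Section IV.A, the following Python/NumPy sketch converts an RGB image to chromatic (normalized r, g) coordinates, keeps the pixels that fall inside a rectangular skin cluster, and crops to the bounding box of that region. The cluster bounds and the function names are illustrative assumptions, not the calibrated model used by IRENE.

    import numpy as np

    def skin_mask(rgb, r_range=(0.36, 0.46), g_range=(0.28, 0.36)):
        """Return a boolean mask of likely skin pixels.

        rgb: HxWx3 array of uint8 values.
        The chromatic (normalized) color space discards intensity:
            r = R / (R + G + B),  g = G / (R + G + B)
        Skin pixels of different ethnicities cluster in a small region of
        this (r, g) plane; the bounds above are illustrative assumptions.
        """
        rgb = rgb.astype(np.float64)
        total = rgb.sum(axis=2) + 1e-6          # avoid division by zero
        r = rgb[..., 0] / total
        g = rgb[..., 1] / total
        return ((r_range[0] < r) & (r < r_range[1]) &
                (g_range[0] < g) & (g < g_range[1]))

    def crop_face(rgb):
        """Crop the image to the bounding box of the detected skin region."""
        mask = skin_mask(rgb)
        if not mask.any():
            return rgb                           # fall back to the full frame
        rows = np.where(mask.any(axis=1))[0]
        cols = np.where(mask.any(axis=0))[0]
        return rgb[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]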
B. Facial Expression Detection
For this part we use a combination of the Eigenfaces, Eigeneyes, and Eigenlips methods, based on Principal Component Analysis (PCA) [6, 7]. This analysis retains only the characteristic features of the face corresponding to a specific facial expression and discards the rest. This strategy reduces the number of training samples needed and helps keep the system computationally inexpensive, which is one of our prime goals. The resulting images are used as samples for training the Eigenfaces method, and the Eigenfaces with the highest Eigenvalues are retained. We generate the Eigenspace as follows:

• The first step is to obtain a set S of M face images. Each image is transformed into a vector of size N^2 and placed into the set, S = \{\Gamma_1, \Gamma_2, \Gamma_3, \ldots, \Gamma_M\}.

• The second step is to obtain the mean image \Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i.

• We find the difference \Phi between each input image and the mean image, \Phi_i = \Gamma_i - \Psi.

• Next we seek a set of M orthonormal vectors u_k which best describes the distribution of the data. The k-th vector u_k is chosen such that
\lambda_k = \frac{1}{M}\sum_{i=1}^{M}\left(u_k^{T}\Phi_i\right)^2
is a maximum, subject to
u_l^{T}u_k = \delta_{lk} = \begin{cases}1, & \text{if } l = k\\ 0, & \text{otherwise}\end{cases}
where u_k and \lambda_k are the eigenvectors and eigenvalues of the covariance matrix C.

• The covariance matrix C is obtained in the following manner:
C = \frac{1}{M}\sum_{i=1}^{M}\Phi_i\Phi_i^{T} = AA^{T}, \quad \text{where } A = [\Phi_1\ \Phi_2\ \ldots\ \Phi_M].

• Finding the eigenvectors of the N^2 \times N^2 covariance matrix is a huge computational task. Since M is far smaller than N^2, we can instead construct the M \times M matrix L = A^{T}A, where L_{mn} = \Phi_m^{T}\Phi_n.

• We find the M eigenvectors v_l of L.

• These vectors determine linear combinations of the M training face images that form the Eigenfaces u_l:
u_l = \sum_{k=1}^{M} v_{lk}\Phi_k, \quad l = 1, 2, \ldots, M.

After computing the eigenvectors and eigenvalues of the covariance matrix of the training images:

• The M eigenvectors are sorted in order of descending eigenvalues, and the top M' eigenvectors are chosen to represent the Eigenspace.

• Each of the original images is projected into the Eigenspace to find a vector of weights representing the contribution of each Eigenface to the reconstruction of that image.

When detecting a new face, the facial image is projected into the Eigenspace and the Euclidean distance between the new face and all the faces in the Eigenspace is measured. The face with the closest distance is taken as the match for the new image. A similar process is followed for the Eigenlips and Eigeneyes methods. The mathematical steps are as follows:

• Any new image \Gamma is projected into the Eigenspace to find its face-key
\omega_k = u_k^{T}(\Gamma - \Psi), \quad \Omega = [\omega_1, \omega_2, \ldots, \omega_{M'}]^{T}
where u_k is the k-th eigenvector and \omega_k is the k-th weight in the weight vector \Omega.

• The M' weights represent the contribution of each respective Eigenface. The vector \Omega is taken as the 'face-key' of a face image projected into the Eigenspace. We compare any two face-keys by a simple Euclidean distance measure
\epsilon_k = \lVert \Omega - \Omega_k \rVert.

• An acceptance (the two face images match) or rejection (the two images do not match) is determined by applying a threshold.
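The training and matching procedure above reduces to a few lines of linear algebra. The following NumPy sketch builds the reduced M x M matrix L = A^T A, derives the Eigenfaces, projects every training image to its face-key, and matches a new image to the closest key by Euclidean distance. The number of retained eigenvectors and the acceptance threshold are illustrative parameters, not the values used in our prototype.

    import numpy as np

    def train_eigenspace(images, m_keep=10):
        """images: list of equally sized grayscale face arrays.
        Returns (mean, eigenfaces, face_keys) for the training set."""
        A = np.stack([im.ravel().astype(np.float64) for im in images], axis=1)  # N^2 x M
        mean = A.mean(axis=1, keepdims=True)                                    # Psi
        Phi = A - mean                                                          # difference images
        L = Phi.T @ Phi                                                         # M x M instead of N^2 x N^2
        eigvals, V = np.linalg.eigh(L)
        order = np.argsort(eigvals)[::-1][:m_keep]                              # top eigenvalues
        U = Phi @ V[:, order]                                                   # Eigenfaces u_l
        U /= np.linalg.norm(U, axis=0)                                          # normalize columns
        face_keys = U.T @ Phi                                                   # one weight vector per image
        return mean, U, face_keys

    def match(image, mean, U, face_keys, threshold=None):
        """Project a new image and return the index of the closest training face,
        or None (rejection) if the distance exceeds an application-chosen threshold."""
        omega = U.T @ (image.ravel().astype(np.float64)[:, None] - mean)        # face-key Omega
        dists = np.linalg.norm(face_keys - omega, axis=0)                       # Euclidean distances
        best = int(np.argmin(dists))
        if threshold is not None and dists[best] > threshold:
            return None
        return best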
C. Integration with Facebook
An application has been developed for this purpose. The application uses the device's camera to capture facial images of the user, recognizes their mood, and reports it. The mobile version, for iPhone and Android powered devices, uses the device's built-in camera to capture images and transfers each image to a server. The server extracts the face and the facial feature points, classifies the mood using a classifier, and sends the recognized mood back to the mobile application. The mobile application can connect to Facebook, allowing the user to publish their recognized mood. Depending upon the recognized mood and the location of the user, the server also sends context aware advertisements to the mobile application. The desktop version allows users to use their PC's webcam to capture images and send them to the server through a web application; the web application uses JavaScript and Flash to control the webcam. As with the mobile application, the data are sent to the server, which detects the user's mood. The recognized mood is sent back to the web application, and the user can connect to Facebook and publish it.
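A minimal sketch of this client round trip is given below in Python. The server URL, field names, and the Graph API publishing call are hypothetical placeholders; the deployed mobile and web clients use their own protocol and Facebook's publishing permissions.

    import requests

    MOOD_SERVER = "https://example.org/irene/detect"     # hypothetical IRENE server endpoint
    GRAPH_API = "https://graph.facebook.com/me/feed"     # illustrative Facebook publishing URL

    def detect_mood(image_path):
        """Upload a captured image and return the mood label sent back by the server."""
        with open(image_path, "rb") as f:
            resp = requests.post(MOOD_SERVER, files={"image": f}, timeout=30)
        resp.raise_for_status()
        return resp.json()["mood"]          # e.g. "happiness", "sadness", ...

    def publish_mood(mood, access_token):
        """Publish the recognized mood to the user's wall (subject to privacy settings)."""
        resp = requests.post(GRAPH_API,
                             data={"message": f"My current mood: {mood}",
                                   "access_token": access_token})
        resp.raise_for_status()
        return resp.json()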
V. CHARACTERISTICS
Our application has several unique features and research challenges compared to other such applications. Several important functionalities of our model are described here.

A. Real Time Mood for Social Media
A considerable amount of research has been done on facial expression detection, but real time mood detection integrated with Facebook is a novel idea. Using the 'extreme connectivity' of Facebook will help people spread their happiness to others and, at the same time, help people overcome sadness, sorrow, and depression. Thus it can improve quality of life. With the enormous number of Facebook users and its current growth, it can have a real positive impact on society.

B. Location Aware Sharing
Though people like to share their moods with friends and others, there are scenarios in which they do not want to publish their mood because of where they are. For example, if someone is in the office and in a depressed mood after an argument with his boss, he certainly would not like to publish that mood; if that information were published, he might be put in an awkward position. Our model takes location as a context when making the publishing decision.

C. Mood Aware Sharing
There are moods that people like to share with everyone and moods which people do not like to share. For example, one might like to share every mood but anger with everyone, or share specific groups of moods with specific people. Someone may want to share only a happy face with kids (or people below a specific age) and all available moods with others. All these issues have been taken care of; a sketch of such a policy check appears at the end of this section.

D. Mobility
Our model works for both fixed and handheld devices, which ensures mobility. Two different protocols are being developed for connecting to the web server from fixed devices (laptops, desktops, etc.) and from handheld devices (cell phones, PDAs, etc.). Recent statistics show that around
150 million active users access Facebook from their mobiles [1]. This feature helps users access our mood based application on the fly.

E. Resources for Behavioral Research
Appropriate use of this model can provide a huge amount of user certified data for behavioral scientists. Right now billions of dollars are being spent on projects that try to improve quality of life. To do that, behavioral scientists need to analyze the mental state of different age groups, and their likes and dislikes, among other issues. Statistical resources about the emotions of millions of users could provide them with invaluable information for reaching a conclusive model.

F. Context Aware Event Manager
The event manager suggests events that might suit the current mood of the user. It works as a personalized event manager for each user by tracking the user's previous records. When it is about to suggest an activity to make a person happy, it tries to find out whether there was any particular event that made the person happy before. This makes the event manager more specific and relevant to someone's personal wish list and likings.
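The location aware and mood aware sharing rules of Sections V.B and V.C amount to a per-user policy check performed before anything is published. The sketch below illustrates such a check in Python; the rule structure, field names, and example values are assumptions for illustration, not the exact policy engine of IRENE.

    # Hypothetical sharing-policy check combining the location-aware (V.B)
    # and mood-aware (V.C) rules before a detected mood is published.
    DEFAULT_POLICY = {
        "blocked_locations": {"office"},            # never publish from these places
        "blocked_moods": {"anger"},                 # never publish these moods
        "audience_by_mood": {                       # who may see each mood
            "happiness": "everyone",
            "sadness": "friends",
        },
    }

    def sharing_decision(mood, location, policy=DEFAULT_POLICY):
        """Return (publish?, audience) for a detected mood in a given location."""
        if location in policy["blocked_locations"]:
            return False, None                      # e.g. Scenario 3: sad mood at the office
        if mood in policy["blocked_moods"]:
            return False, None                      # e.g. Scenario 1: anger is kept private
        audience = policy["audience_by_mood"].get(mood, "friends")
        return True, audience

    # Example: sharing_decision("sadness", "home") -> (True, "friends")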
VI. EVALUATION

We have implemented a prototype of the proposed model and have also collected user feedback.

A. Implementation of the Prototype
We have built the first prototype of IRENE. The ultimate goal is to use the application from any device, mobile or browser. Because of the large computational power needed for image processing and facial expression analysis, we needed software like MATLAB; hence the overall design can be thought of as the integration of three phases. First, we need MATLAB for facial expression recognition. Second, the MATLAB script needs to be callable through a web service, which ensures that it is available from any platform, including handheld devices. Lastly, we need a Facebook application which calls the web service. First we built a training database with six basic expressions: anger, fear, happiness, neutral, sadness, and surprise. Figure 2 is a screenshot of the training database. Initially we also took pictures for the expression 'depression', but we failed to distinguish between 'sadness' and 'depression' even with the naked eye, so we later discarded the 'depression' images from the database. While building the database we tried to cover many different aspects, including gender, lighting conditions, and background, and we also used images containing beards and glasses. Our database has pictures of males and females of diverse ethnicity (White, African-American, Caucasian, and Hispanic). After training, we implemented the client side of the web service call using PHP and JavaScript: the client uploads an image and the expression is detected through the web service call. On the server side we used the Apache Tomcat container as the application server with the Axis2 SOAP engine; a PHP script calls that web service from a browser. The user uploads a picture from the browser and the facial expression is detected via the web service call. Figure 3 shows the high level architectural overview.
Figure 3. Web based expression detection architecture.
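The prototype exposes the MATLAB recognizer through a PHP client and an Apache Tomcat/Axis2 SOAP service, as shown in Figure 3. Purely for illustration, the sketch below expresses the same server side flow as a plain HTTP endpoint in Python; the route, the run_expression_classifier placeholder, and the JSON response format are assumptions rather than the deployed SOAP interface.

    # Illustrative stand-in for the prototype's server side: the deployed system
    # wraps a MATLAB recognizer behind an Axis2 SOAP service on Apache Tomcat;
    # here the same flow is sketched as a plain HTTP endpoint.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def run_expression_classifier(image_bytes):
        """Placeholder for the face extraction and eigenspace classification step."""
        # ... face extraction, eigenspace projection, nearest-neighbour match ...
        return "happiness"                      # one of the six trained expressions

    @app.route("/detect", methods=["POST"])
    def detect():
        image = request.files["image"].read()   # image uploaded by the client
        mood = run_expression_classifier(image)
        return jsonify({"mood": mood})

    if __name__ == "__main__":
        app.run(port=8080)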
We have deployed IRENE on Facebook as a test application and collected some screenshots. Figure 4 shows the capture of the image that is to be analyzed for mood.
Figure 4. Face Detection.
Figure 2. Facial Expression Training Database.
After detecting the person's mood, the Facebook application displays related advertisements along with the mood. These advertisements match the personal interests in the user's profile. In Figure 5, the related advertisements are shown for the detected mood and location.
Figure 5. Advertisement with respect to mood.

In Figure 6, the user is shown sharing the recognized mood with his/her friends by publishing it.

Figure 6. Invite friends using IRENE.

B. Measurement of Performance
We conducted a survey among a group of 127 people of different educational backgrounds, aged 18 to 35, to get user perceptions of the application we are building. Male participation in this survey was 58% and female participation was 42%. When we asked whether they would like to try a new application which detects mood using facial expressions, 53% of the survey takers said "Yes", 26% said "Not Sure", and the remaining 21% said "No". The percentage of males interested in using this application was higher than the percentage of females. We had 11 questions in the questionnaire; the results of the most significant questions are summarized in the following charts. Almost all the answers show that Facebook stands on people's trust in their friends. This connectivity and reliability can be utilized as a possible solution to many issues including depression, dementia, and isolation. When asked why people would like to use this application, the largest share, 52%, believed that the main reason for using such an application is 'Just for fun'; the remaining responses were split among 'To seek relevant help', 'To gain their friends' attention', and 'To help provide data for research purposes'. Figure 7 summarizes the responses.

Figure 7. Users' perception on reasoning of mood detection.

The survey takers had quite varied opinions about whom they would like to share the application results with. Amid so much discussion regarding the privacy of Facebook applications, this question was particularly significant. Even with all the concerns, a very high number of people (51%) wanted to share the result with "Only Friends", and only 1% showed interest in sharing it with more distant people such as "Friends of Friends"; the rest were divided among "Everyone", "None", and "Selective people from friend list". Figure 8 shows the result.

Figure 8. Users' perception on mood sharing.
We also asked the survey takers to choose the activities that are of major interest to them and hence can help cheer them up on a bad day. “Having lunch/dinner with a friend” and “Watching a movie” topped the charts with 27% and 26% respectively. This answer was used in developing the personalized event manager. Figure 9 shows the interest level in various activities by gender.
(Figure 9 compares male and female interest in: online chat, having lunch with a friend, watching a movie, going to an event, online games, and other activities.)
Figure 9. Interest level of people in different activities.
The overall result of the survey was very satisfying, as many people showed interest in trying the application and also provided us with useful data for application enhancements.

VII. CRITICAL & UNRESOLVED ISSUES

Mood detection from facial expression is a complex issue involving several dynamic features such as timing, duration, and intensity. Here we point out some of these unsolved issues.

A. Deception of Expression (Suppression, Amplification, Simulation)
The degree of control over suppression, amplification, and simulation of a facial expression is yet to be sorted out when determining any type of facial expression automatically. Galin and Thorn [37] worked on the simulation issue, but their result is not conclusive. In several studies researchers obtained mixed or inconclusive findings during their attempts to identify suppressed or amplified pain expressions [37, 38]. We plan to test our model considering these issues.

B. Difference in Cultural, Racial, and Sexual Perception
Facial expressions clearly differ among people of different races and ethnicities. Culture plays a major role in our expression of emotions: it dominates the learning of emotional expression (how and when) from infancy, and by adulthood that expression becomes strong and stable [31, 32]. All current mood detection models use the same algorithm for men and women, while research [29, 30] shows notable differences in the perception and experience of mood and expression between the genders. Fillingim [28] attributed this to biological, social, and psychological differences between the two genders. This gender issue has been neglected so far in the literature.

C. Intensity
According to Cohn [34], occurrence/non-occurrence of facial features, temporal precision, intensity, and aggregates are the four reliabilities that need to be analyzed when interpreting the facial expression of any emotion. Most researchers, including Pantic and Rothkrantz [2] and Tian, Cohn, and Kanade [33], have focused on the first issue (occurrence/non-occurrence). The current literature has failed to identify the intensity level of facial expressions; we plan to incorporate this feature in the future.
D. Dynamic Features
Several dynamic features, including timing, duration, amplitude, head motion, and gesture, play an important role in the accuracy of emotion detection. Slower facial actions appear more genuine [36]. Edwards [35] showed the sensitivity of people to the timing of facial expressions. Cohn [34] related head motion to a sample emotion, the smile: he showed that the intensity of a smile increases as the head moves down and decreases as it moves upward back to its normal frontal position. These issues of timing, head motion, and gesture have been neglected, although they would have increased the accuracy of facial expression detection. We plan to take these features into account along with facial image analysis.

VIII. CONCLUSION

Here we have proposed a real time mood detection and sharing application for Facebook. The novelties and their impact on society have been described. Several context awareness features make this application unique compared to other applications of its kind. A customized event manager has been incorporated to make suggestions based on user mood, which is a new trend in current online advertisement strategy. A survey result has also been attached to show the significance and likely acceptance of such an application among the population. Currently we are working to incorporate the intensity level of facial expressions along with the detected mood. There are still several open issues, as discussed before. Along with those issues, the biggest concern of any Facebook application is privacy: there is no Facebook application that appropriately handles user privacy, and Facebook avoids its responsibility by putting the burden on developers' shoulders. We plan to incorporate this issue, especially location privacy, in our extended model; we have already shown some initial work in the privacy area in [26, 27]. We also plan to explore other mood detection algorithms to find the most computationally inexpensive and robust method for Facebook integration.

ACKNOWLEDGEMENT

IRENE means "peace" in Greek and may refer to Eirene, the Greek goddess. We thank Imran Reza for selecting such a meaningful name. We acknowledge the initial contribution of our lab member Tanvir Zaman in this project.

REFERENCES
[1] http://www.Facebook.com/press/info.php?statistics
[2] Pantic, M. and Rothkrantz, L.J.M. 2000. Automatic analysis of facial expressions: the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 1424-1445.
[3] http://metrics.admob.com/
[4] Gökalp, D. Skin color based face detection. Department of Computer Engineering, Bilkent University, Turkey.
[5] Wyszecki, G. and Stiles, W.S. 1982. Color Science: Concepts and Methods, Quantitative Data and Formulae. Second edition, John Wiley & Sons, New York.
[6] Turk, M. A. and Pentland, A. P. 1991. Face recognition using Eigenfaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June, 586-591.
[7] Pantic, M. and Rothkrantz, L.J.M. 2000. Expert system for automatic analysis of facial expressions. Image and Vision Computing, 881-905.
[8] Pantic, M. and Rothkrantz, L.J.M. 2003. Toward an affect-sensitive multimodal human-computer interaction. Proceedings of the IEEE, September, 1370-1390.
[9] Rimé, B., Finkenauer, C., Luminet, O., Zech, E., and Philippot, P. 1998. Social sharing of emotion: new evidence and new questions. European Review of Social Psychology, Volume 9.
[10] Ekman, P. and Friesen, W. 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA.
[11] Littlewort, G., Bartlett, M.S., and Lee, K. 2006. Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain. In Proceedings of the 13th Joint Symposium on Neural Computation, San Diego, CA.
[12] Smith, E., Bartlett, M.S., and Movellan, J.R. 2001. Computer recognition of facial actions: a study of co-articulation effects. In Proceedings of the 8th Annual Joint Symposium on Neural Computation.
[13] Bartlett, M.S., Littlewort, G.C., Lainscsek, C., Fasel, I., Frank, M.G., and Movellan, J.R. 2006. Fully automatic facial action recognition in spontaneous behavior. In 7th International Conference on Automatic Face and Gesture Recognition, 223-228.
[14] Bartlett, M., Littlewort, G., Whitehill, J., Vural, E., Wu, T., Lee, K., Ercil, A., Cetin, M., and Movellan, J. 2006. Insights on spontaneous facial expressions from automatic expression measurement. In Giese, M., Curio, C., Bulthoff, H. (Eds.), Dynamic Faces: Insights from Experiments and Computation, MIT Press.
[15] Littlewort, G., Bartlett, M.S., and Lee, K. 2009. Automatic coding of facial expressions displayed during posed and genuine pain. Image and Vision Computing, 27(12), 1741-1844.
[16] Braathen, B., Bartlett, M.S., Littlewort-Ford, G., Smith, E., and Movellan, J.R. 2002. An approach to automatic recognition of spontaneous facial actions. In Fifth International Conference on Automatic Face and Gesture Recognition, 231-235.
[17] Ashraf, A. B., Lucey, S., Cohn, J. F., Chen, T., Prkachin, K. M., and Solomon, P. E. 2009. The painful face II: pain expression recognition using active appearance models. International Journal of Image and Vision Computing, 27(12), 1788-1796.
[18] Ashraf, A. B., Lucey, S., Cohn, J. F., Chen, T., Ambadar, Z., Prkachin, K., Solomon, P., and Theobald, B. J. 2007. The painful face: pain expression recognition using active appearance models. In ICMI.
[19] Monwar, M., Rezaei, S., and Prkachin, K. 2007. Eigenimage based pain expression recognition. IAENG International Journal of Applied Mathematics, 36(2), IJAM_36_2_1 (online version available 24 May 2007).
[20] Monwar, M. and Rezaei, S. 2006. Appearance-based pain recognition from video sequences. In IJCNN, 2429-2434.
[21] Gholami, B., Haddad, W. M., and Tannenbaum, A. 2010. Relevance vector machine learning for neonate pain intensity assessment using digital imaging. IEEE Transactions on Biomedical Engineering, to appear.
[22] Monwar, M. and Rezaei, S. 2006. Pain recognition using artificial neural network. In IEEE International Symposium on Signal Processing and Information Technology, Vancouver, BC, 28-33.
[23] Raheja, J. L. and Kumar, U. 2010. Human facial expression detection from detected in captured image using back propagation neural network. International Journal of Computer Science & Information Technology (IJCSIT), 2(1), 116-123.
[24] Brahnam, S., Nanni, L., and Sexton, R. 2007. Introduction to neonatal facial pain detection using common and advanced face classification techniques. Studies in Computational Intelligence, vol. 48, 225-253.
[25] Brahnam, S., Chuang, C.-F., Shih, F., and Slack, M. 2006. Machine recognition and representation of neonatal facial displays of acute pain. Artificial Intelligence in Medicine, vol. 36, 211-222.
[26] Rahman, F., Hoque, M. E., Kawsar, F. A., and Ahamed, S. I. Preserve your privacy with PCO: a privacy sensitive architecture for context obfuscation for pervasive e-community based applications. To appear in the Second IEEE International Conference on Social Computing (SocialCom-2010).
[27] Talukder, N. and Ahamed, S. I. 2010. Preventing multi-query attack in location-based services. In Proceedings of the Third ACM Conference on Wireless Network Security (WiSec), March 2010, 25-36.
[28] Fillingim, R. B. 2000. Sex, gender, and pain: women and men really are different. Current Review of Pain, 4, 24-30.
[29] Berkley, K. J. Sex differences in pain. Behavioral and Brain Sciences, 20, 371-380.
[30] Berkley, K. J. and Holdcroft, A. Sex and gender differences in pain. In Textbook of Pain, 4th edition, Churchill Livingstone.
[31] Malatesta, C. Z. and Haviland, J. M. 1982. Learning display rules: the socialization of emotion expression in infancy. Child Development, 53, 991-1003.
[32] Oster, H., Camras, L. A., Campos, J., Campos, R., Ujiee, T., Zhao-Lan, M., et al. 1996. The patterning of facial expressions in Chinese, Japanese, and American infants in fear- and anger-eliciting situations. Poster presented at the International Conference on Infant Studies, Providence, RI.
[33] Tian, Y., Cohn, J. F., and Kanade, T. 2005. Facial expression analysis. In S. Z. Li and A. K. Jain (Eds.), Handbook of Face Recognition, 247-276. New York: Springer.
[34] Cohn, J. F. 2007. Foundations of human-centered computing: facial expression and emotion. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'07), Hyderabad, India.
[35] Edwards, K. 1998. The face of time: temporal cues in facial expressions of emotion. Psychological Science, 9(4), 270-276.
[36] Krumhuber, E. and Kappas, A. 2005. Moving smiles: the role of dynamic components for the perception of the genuineness of smiles. Journal of Nonverbal Behavior, 29, 3-24.
[37] Galin, K. E. and Thorn, B. E. 1993. Unmasking pain: detection of deception in facial expressions. Journal of Social and Clinical Psychology, 12, 182-197.
[38] Hadjistavropoulos, T., McMurtry, B., and Craig, K. D. 1996. Beautiful faces in pain: biases and accuracy in the perception of pain. Psychology and Health, 11, 411-420.