International Journal of Research in Computer and Communication Technology, Vol. 2, Issue 11, November 2013
ISSN (Online): 2278-5841, ISSN (Print): 2320-5156
Issues and Dependencies on Affect Data Models for Assessment of Disorders - A Survey
Rekha S. Sugandhi and Anjali R. Mahajan

Abstract—Analysis of free-form data has always been a challenge. Data that depicts affects in hidden form and/or in various modes further complicates the task of mining the input to identify and analyze the relevant affects. Affect analysis is required in various applications such as mood identification, evaluation of learning disorders, product reviews, and blog/forum opinion analysis. This paper focuses on the identification of affect parameters and the related issues in the design of cognitive systems for the diagnosis and evaluation of psychological disorders. The paper presents a survey of the various uni- and multi-modal affect analysis systems designed for similar applications.

Index Terms—Affective computing, multimodal affect recognition, sentiment analysis, affect mining, psychoanalysis.

I. INTRODUCTION

Affects refer to various human traits covering not only emotional states but also feelings, moods, knowledge, beliefs and intentions. A few systems already recognize human affects from text data such as discussion forums, blogs and plain text and, in some cases, from speech. Beyond identifying affects, it is often necessary to analyze them further so that appropriate actions can be taken. Knowledge about these aspects can be used to improve human-computer interaction and to help machines understand human behavior, so that this knowledge can be used to give a personalized experience to the user [1, 2].

Affect recognition/analysis systems assume certain predefined data models with specific parameters on which the mining and machine learning algorithms are designed. Due to a lack of objectivity, evaluation of sentiments is not straightforward [3]. Secondly, information depicting moods, emotions or affects could be deliberately hidden, or may not be obviously displayed by a hesitant patient (or subject). Thirdly, system performance depends on the variations in the modes (text, speech, image and video) of the input data; such systems are called multi-modal systems. Previous research indicates that human emotions play a major role in perception, memory management and attention [4].

In the area of artificial intelligence (AI), affects and their analysis are exploited for applications such as mood identification, evaluation of learning disorders, product reviews, and summarizing opinions shared on blogs and discussion forums. Applications in the treatment of psychological or mental disorders need a more specific design to obtain diagnostic outcomes. Some applications of affects in mental/psychological disorders are analysis of a patient's mental health, understanding learning disabilities or mental health problems in children, and detecting malicious hidden messages (e.g. codes in languages used in anti-social or terrorism-related messaging). All these varying applications share a common goal: to enable computers to take human-like decisions using a cognitive approach.
II. ISSUES IN AFFECT ANALYSIS

A. Scarcity of Relevant Data

A large volume of data is available on the web and proves very resourceful for web mining applications, irrespective of whether the requirement is for text data or multimedia inputs. As far as the design of intelligent systems for general mining applications is concerned, the practically unlimited data on the web is a blessing, both for training and for testing the performance of the implemented algorithm. This usually does not hold true for affect mining in applications related to psychoanalysis, for the following reasons:
- Databases in electronic form are, in general, very difficult to obtain due to data confidentiality issues and the difficulty of ensuring the security of sensitive patient mental health records.
- Datasets from different sources have varying formats and parameter descriptions. Further, the parameter value ranges need to be normalized before they can be combined to form larger datasets.
- Even when data in electronic form is made available, the validity of the affect data values is not guaranteed.

Collection of valid data for training and testing an affect analysis algorithm is a tedious task in itself that cannot be done without the pro-active involvement of the domain expert (a psychiatrist in this case). Data collection is difficult largely because of the high sensitivity of, and legal issues around, the personal data of psychiatric patients. Subjects may elude or manipulate facts presented to the data collector, which can further affect the performance and accuracy of the analysis algorithm through missing-data issues. Most medical institutes dealing with the treatment of psychiatric disorders need to follow strict patient confidentiality pacts. In such cases, providing patient data and medical history for external or independent research may be difficult
without causing a breach of confidentiality. Thus, the lack of real data and real patient parameters leads to incomplete, inaccurate and even meaningless data analysis, in which the automated diagnosis holds no strength.

As a result of the unavailability of large real-valued databases, researchers working on affect models are often required to generate their own datasets with some voluntary assistance from a domain expert or psychiatrist [5]. This is not an effective practice since, for training and testing an affect analysis system, large databases have to be generated and, more importantly, the parameters and their values for the enormous input data requirement have to be assumed. This is not only an exhausting task but also makes the data difficult to validate because of its volume. The large volume of generated data is error-prone, and data collection or generation can take a lot of time. The bias introduced by generating data through artificial means (selecting variable values using randomization) means that the measured performance of the algorithm designed for the related affect model cannot be fully justified.

B. Missing and Redundant Data

Electronic Health Records (EHRs) contain enormous numbers of patient records differing in their parameters. For algorithms designed to analyze the data values, it is necessary to know in advance which parameters need to be used, manipulated or pruned for further processing of the outcome. Often, the related patient data has different forms and structures, for instance facial images, samples of speech, family history in unstructured text form, etc. (see Section III.C, Multi-Modal Systems) [6].

A major issue with health records is that of missing data values or incomplete records. This occurs due to varied or random frequencies of data recording, misreporting, or loss of
recording due to human errors. Patient data needs to be recorded at certain predefined intervals and, depending on the progress of the recovery, the intervals or frequencies may be dynamically changed. Doing this leads to data being recorded at different time intervals. Not only the data values but also their time and interval of recording affect the analysis outcomes. Errors or precision losses occurring while recording the measured data values could also contribute to biased analysis results. This is much more probable in recordings of observations of subjects during psychoanalysis, the reason being that affective cues require close observation and personal interpretation, as compared to physiological symptoms that can be accurately measured with the help of specific monitoring devices. Since affects have to be detected by software or by human judgment and observation, the precision of recording may vary from person to person (psychoanalyst, in this case), depending on experience and minute observation skills.

Patient records are preferably recorded at a frequency based on the normal heart rate. However, some physiological and most psychological parameters occur infrequently, in which case recording those values at the heart-rate frequency would add many redundant values to the records, making them bulky without adding much information.

For data recorded by multiple observers for analysis, there may be differences in the observations of the participants. These differences may be caused by various factors such as context unawareness, differences in focus levels during the observation and recording process, lack of adherence to common protocols for marking the observed gestures or affects, etc. [7].

Considering the conventional techniques of imputing or removing records with missing values, rather than deleting the records with some missing values, the missing values in the databases can be imputed. In order to recover the missing values, they are first classified as recoverable or not-recoverable missing data values. Only the variables classified as recoverable are imputed, using classification-based techniques such as fuzzy modeling or support vector machines.

The second issue is that of recording the different parameters at different times. This leads to misalignment of the data values of related parameters stored in the database. For records containing misaligned data, pre-processing of the stored data becomes mandatory. Certain conventional methods have been employed to align the data values to a fixed temporal scale. One of these methods, called gridding, takes a reference temporal grid and shifts all variables to the reference grid. Gridding necessarily alters the values of all the variables to an aligned base, using uniform re-sampling and cubic interpolation of the variable values. The other method for alignment, called templating, takes a misaligned data value and transforms (aligns) it with respect to the reference time of another related data value in the same record. Templating is based on estimation using sampling, aggregation and interpolation.

F. Cismondi et al. [5] describe a method in which, to find the missing data values, the pattern of missingness first needs to be classified. The following steps have been implemented:
1. Alignment of misaligned data, using gridding and templating.
2. Statistical classification to differentiate between data that has been inadequately collected and actual missing data.
3. Classification of data into recoverable and not-recoverable categories to find missing values.

All the data values classified as recoverable are imputed by averaging out the sample data values on each side of the missing data value position.
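A minimal sketch of these two steps is given below, assuming a single numeric variable sampled at irregular times: the alignment step is approximated here with linear interpolation onto a reference grid (the method above uses cubic interpolation), and only isolated gaps are treated as recoverable and filled by averaging the neighbouring samples. The function names and the one-sample gap criterion are illustrative, not taken from [5].

```python
import numpy as np

def grid_align(times, values, grid):
    # Shift an irregularly sampled variable onto a reference temporal grid.
    # Linear interpolation is used as a simple stand-in for the uniform
    # re-sampling with cubic interpolation described above.
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    return np.interp(np.asarray(grid, dtype=float), times, values)

def impute_recoverable(series):
    # Treat only isolated NaN gaps as "recoverable" and fill them with the
    # average of the two neighbouring samples; longer gaps are left as NaN,
    # i.e. classified as not recoverable under this toy criterion.
    out = np.asarray(series, dtype=float).copy()
    missing = np.isnan(out)
    for i in np.flatnonzero(missing):
        if 0 < i < len(out) - 1 and not missing[i - 1] and not missing[i + 1]:
            out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

# Example: readings taken at uneven times, with one missing value.
t = [0.0, 1.5, 2.0, 4.5, 6.0]
hr = [72.0, 75.0, np.nan, 80.0, 78.0]
aligned = grid_align(t, impute_recoverable(hr), grid=np.arange(0.0, 6.1, 1.0))
print(aligned)
```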
The average is assumed to be approximate, since patient health data usually shows gradual variation in most of the parameters over the temporal domain. The data sets used for testing were randomly generated; the algorithm has not been tested on actual data, so its validity and predictive outcome may not be realistic.

C. Hidden Meaning in Data

The affect-identification capabilities of human beings cannot always be simulated on a machine with exactly the same precision and cognizance. Complex emotions like deception and sarcasm are indicated through cues that appear contrary to them. These may be identified easily by humans, but for a learning algorithm to do so, certain historical data and the appropriate context need to be fed to the system. Some human reactions and affects also vary in valence depending on culturally-oriented affect behavior systems [8]; the analyses at such times have to be tuned or done selectively to give the expected outcome. For instance, East Asians are reported to be less expressive and to show lower valence than their western counterparts [7].

The data input to affect processing models may be text, speech or gestures. When speech input is fed to the affect analysis program, feature extraction from the processed speech signals can be done by simple, non-cognitive machine learning algorithms. Given a set of prosodic speech parameters and their threshold ranges, converting the speech signals to affects can be done by various straightforward methods, as sketched below. A similar straightforward approach, however, may not work sufficiently well for gesture or text input modes.
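As an illustration of such a threshold-based mapping (not a method from the cited works), the sketch below assigns a coarse affect label from three prosodic features; the feature names and threshold values are assumed purely for demonstration and would need calibration per speaker and corpus.

```python
def affect_from_prosody(pitch_mean_hz, intensity_db, speaking_rate_wps):
    # Illustrative threshold ranges only; a real system would learn or
    # calibrate these before mapping prosodic features to affect labels.
    if pitch_mean_hz > 220 and intensity_db > 70 and speaking_rate_wps > 3.0:
        return "excited/angry"      # high arousal cues
    if pitch_mean_hz > 220 and intensity_db <= 70:
        return "happy/surprised"
    if pitch_mean_hz < 150 and speaking_rate_wps < 2.0:
        return "sad/tired"          # low arousal cues
    return "neutral"

print(affect_from_prosody(pitch_mean_hz=240, intensity_db=75, speaking_rate_wps=3.5))
```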
Videos containing body language, gestures and facial expressions are an important source of information for affect recognition systems, but the implemented algorithm needs to be intelligent enough to relate the gestures to the exact affect. False pretensions and redundant gestures often lead to misinterpretation of affects by the machine. Human beings possess the better cognizance required to correctly differentiate between minute variations in gestures. To simulate similar capabilities in machines, the gestures first have to be identified in time sequence, and image gestures need to be interpreted in numeric or text form. This in itself is an extremely difficult task, since videos may contain more than one combination of gesture details; for instance, a gesture could include droopy eyes and a vertically inclined sitting posture, both of which may later be aggregated as "tiredness or fatigue" as the outcome of the affect recognition system, causing indeterminism.

Text inputs seem simpler to analyze for affect recognition, but such a seemingly simple task may become complex because many similar-looking words have multiple senses and meanings. The text may also contain indirect words, such as those expressing sarcasm, which can mean the opposite if the context is not taken into account while deriving the exact word sense [2].

D. Design Issues Due to Varied Structure of Input Data

A generalized algorithm design may not be equally applicable to data models of different modalities. Survey work indicates that body affects are a more appropriate form of input to affect models than facial expressions. If algorithms for body affects are to be implemented, a methodology to record or gather the parameters first has to be devised. The major body affect parameters are valence, arousal and dominance, which are basically single-dimensional values. These values are later combined in multiple dimensions to represent combinations of body affects covering a wide variation of the depicted affects. For instance, some combinations taken are: scales of valence versus arousal (activation), horizontal (lateral) versus vertical body affects, leaning and inclined positions, and active versus passive (non-acted or static) [7]. Other variations in parameter values could be more active versus less active affects, speed of action, etc.
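As a toy illustration of combining these single-dimensional values (not a representation taken from [7]), the sketch below stores valence, arousal and dominance for an observation and derives a coarse label from the valence-arousal quadrant; the thresholds and labels are assumed for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class BodyAffect:
    valence: float    # -1.0 (negative) .. +1.0 (positive)
    arousal: float    #  0.0 (passive)  ..  1.0 (active)
    dominance: float  #  0.0 (submissive) .. 1.0 (dominant)

    def quadrant_label(self) -> str:
        # Coarse valence-arousal quadrants, in the spirit of the combined
        # scales described above; illustrative cut-offs only.
        high_arousal = self.arousal >= 0.5
        positive = self.valence >= 0.0
        if positive and high_arousal:
            return "joy/excitement"
        if positive and not high_arousal:
            return "contentment/calm"
        if not positive and high_arousal:
            return "anger/fear"
        return "sadness/boredom"

print(BodyAffect(valence=-0.6, arousal=0.8, dominance=0.3).quadrant_label())
```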
As far as storage and representation of body affects is concerned, some design issues are:
1. Attaching context to the actual observations (and ensuring its correctness).
2. Different affects having similar body cues, which leads to mis-classification (e.g. fear and anger; joy and anger; joy and grief).
3. Whether, instead of having different representations for different actions, a more concise, action-independent representation can be designed that can be tuned for smaller variations.
4. How neutral motor actions can be distinguished from ones carrying expression, so that they may be ignored.

While designing affect recognition systems for different modalities, for instance video, images and speech, an aggregation of all the captured affects in the temporal domain within the context frames needs to be considered.

III. DEPENDENCIES AND SUPPORT FOR AFFECT MODELS

A. Affect Lexicon Support

Affect models need to rely on the support of a knowledge base consisting of a dictionary or lexicon of words that relate, strongly or weakly, to human affects. This lexicon contains descriptive information on the affect words, specifying each word's properties in terms of its valence (positivity, neutrality or negativity) and the arousal or intensity level associated with the word. Advanced affect lexicons may contain mappings to
antonymous and synonymous words in the same database. The identification of the correct affects depends greatly on the level of word information and other related information stored in the lexicon. Sets of emotions of different degrees and intensities are mostly defined in some form of taxonomy or ontology of emotions in such lexicons. Examples are SentiWordNet (a variant of Princeton WordNet, available from http://wordnet.princeton.edu), HowNet and the General Inquirer. The SentiWordNet lexical database contains synsets from WordNet that are assigned triplet scores of positivity, negativity and neutrality, all three of which sum to 1 [9].

There has been some progress in the generation of Chinese sentiment word lexicons, with the creation of lexical databases like HowNet and some taxonomies built on its architecture. HowNet is a network of Chinese word concepts, analogous to WordNet for English, in which the concepts form nodes in a network defined by inter-conceptual and inter-attribute relations among words. Certain events in the form of verb descriptions are defined for each entry. The HowNet concept network has been used by a few researchers for the generation of taxonomies of affect words [10, 11], but it is applicable to affective computing systems only if the source language is translated into Chinese. The taxonomical categorization of emotion words also restricts the representation of the semantic details of affect data; generally, vector or tree-structured approaches are used for the taxonomy representation. A few works describe domain-oriented lexicon generation used for feature-based emotion extraction, such as product reviews and assessment of customer satisfaction levels [10, 12, 13].

A. Neviarouskaya et al. describe the construction of a lexicon of sentiment words called SentiFul, defined on the basis of polarity scores and measures of whether the word is
sentimentally positive, negative or neutral [14]. The basic sentiment words are extracted from the common word lexicon, WordNet. These basic words are then extended to their antonyms, synonyms, hyponyms, derivatives and compounds. The inclusion of a sentiment word in SentiFul is based on the positive and negative polarity scores of the shortlisted words. An incremental method is used for the construction and evaluation of the lexicon. First, the core of SentiFul is constructed from the words in the Affect database (the authors' earlier creation), in which, on the basis of nine basic emotion words, the other words are represented as vectors of emotional-state intensities with values in the range 0.0 to 1.0. The polarity score and polarity weights for each word are then calculated and assigned. This core lexicon is then extended by adding new words of the following categories, after they have been evaluated for polarity scores and weights:
- synonyms of existing words,
- direct antonyms of existing words,
- hyponyms of existing words,
- derivatives of existing words, and
- compound words consisting of existing words.
All evaluations of the word polarities are based on the intensities defined in the Affect database.
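The lexicon support described above can be prototyped with off-the-shelf resources. The sketch below is only a rough analogue of the SentiFul procedure, not the authors' implementation: it looks up SentiWordNet triplet scores for a word and expands a small seed lexicon with WordNet synonyms and direct antonyms using NLTK (the corpora must be downloaded first with nltk.download).

```python
from nltk.corpus import wordnet as wn
from nltk.corpus import sentiwordnet as swn
# Prerequisites: nltk.download('wordnet'); nltk.download('sentiwordnet')

def senti_scores(word, pos='a'):
    # Average the (positive, negative, objective) triplet over all senses;
    # each SentiWordNet triplet sums to 1 for a single sense.
    senses = list(swn.senti_synsets(word, pos))
    if not senses:
        return None
    n = len(senses)
    return (sum(s.pos_score() for s in senses) / n,
            sum(s.neg_score() for s in senses) / n,
            sum(s.obj_score() for s in senses) / n)

def expand_seed(word, pos=wn.ADJ):
    # Collect synonyms and direct antonyms of a seed word, i.e. the first
    # two of the five expansion categories listed above.
    synonyms, antonyms = set(), set()
    for synset in wn.synsets(word, pos=pos):
        for lemma in synset.lemmas():
            synonyms.add(lemma.name())
            for ant in lemma.antonyms():
                antonyms.add(ant.name())
    return synonyms, antonyms

print(senti_scores('happy'))
print(expand_seed('happy'))
```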
Figure 1: Taxonomy of Various Forms of Affect Data

B. Uni-Modal Systems

Text Based
Analysis of product reviews and blog opinions mainly involves text processing of web data and similar sources. Bias propagation over user data on a network can be performed with the help of a data structure called the Opinion Agreement Graph. The Opinion Agreement Graph is constructed from a similarity measure calculated over the content words taken from text-based social media channels. A random-walk algorithm is then executed on the generated graph to compute the bias in user opinion on a given topic; this method assumes a topic-based approach for the data analysis [15]. Limited matter in the text input restricts the quantity of information available for accurate affect recognition.
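As a rough sketch of this idea, and not the exact algorithm of [15], the snippet below links users whose posts share content words, weights the edges by word-overlap similarity, and runs a PageRank-style random walk with networkx so that users connected to strongly agreeing neighbours accumulate a higher score. The sample posts and the use of plain Jaccard similarity are assumptions for illustration.

```python
import networkx as nx

# Toy posts on one topic; user names and texts are invented for illustration.
posts = {
    "u1": "battery life is great and the camera is great",
    "u2": "great battery life but the screen is dim",
    "u3": "screen is dim and the battery disappoints",
}

def jaccard(a, b):
    # Word-overlap similarity between two users' content words.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

# Build an opinion agreement graph: users are nodes, edge weights are similarities.
G = nx.Graph()
users = list(posts)
for i, u in enumerate(users):
    for v in users[i + 1:]:
        w = jaccard(posts[u], posts[v])
        if w > 0:
            G.add_edge(u, v, weight=w)

# Random-walk (PageRank) scores propagate agreement/bias over the graph.
print(nx.pagerank(G, weight="weight"))
```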
Speech Based
Speech information can be categorized as linguistic (about the content of the speech in words) or para-linguistic (about the way in which the speech was produced). The para-linguistic features are often referred to as prosodic speech signals. A few sentiment analysis systems take the user's speech as input and perform prosodic processing of the speech signals.

The system in [16] takes the speech signal as input, converts it into text and then considers only the text content for the affect analysis. Feature extraction is performed on the text review, and adjective-noun pairs are identified as potential sentiment phrases. Three configurations of audio ASR and feature extraction are considered: manual (transcript based), autopunct (an n-gram language model predicts sentence breaks) and integrated (sentence breaking of each audio clip is integrated with parsing). This system uses a domain-specific bag-of-words to assign weights to the features.

The systems in [1, 17] extract emotions with intensities from the prosodic patterns of the actual speech signals rather than from the text they are converted to. The speech signals are analyzed for context-based emotion recognition. The parameters considered for measurement are prosodic speech patterns such as speech intensity, pitch and speaking rate. The values in the captured speech signals can be related to specific emotional cues; for example, the pitch in the speech is proportional to arousal, which serves as a stimulus to produce affective behavior, and the intensity of the arousal is proportional to the magnitude of the valence of the generated response emotion. A supervised machine learning approach is applied to a set of audio-visual clips, which are then trained and tested for seven different types of emotion. An autocorrelation algorithm is employed for pitch calculation: the pitch is calculated for overlapping frames and the contours are then evaluated using cepstrum analysis, which helps reduce noise. The features are selected for classification using WEKA [17].
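A minimal sketch of frame-wise pitch estimation by autocorrelation is shown below; it is a generic textbook estimator, not the implementation of [1] or [17], and the frame length and pitch search range are assumed values.

```python
import numpy as np

def pitch_track(signal, sr, frame_len=1024, hop=512, fmin=50.0, fmax=400.0):
    # Estimate pitch per overlapping frame: the lag of the strongest
    # autocorrelation peak inside the plausible pitch range gives f0 = sr / lag.
    pitches = []
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        frame = frame - np.mean(frame)
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        pitches.append(sr / lag)
    return np.array(pitches)

# Example: a synthetic 200 Hz tone sampled at 16 kHz should track near 200 Hz.
sr = 16000
t = np.arange(sr) / sr
print(pitch_track(np.sin(2 * np.pi * 200 * t), sr)[:5])
```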
Image or Video Based
Face and Gesture Inputs – Detection of face and gesture (body language) is also termed vision-based detection. It has generally been the practice to ask (human) observers to observe the facial and bodily gestures of subjects in order to identify their affects. In the past few years, attempts have been made to replace the human observers with machines that can analyze the subjects for affects with the help of a reference database of gestures [18]. Such systems are implemented using machine learning algorithms such as Dynamic Bayesian Networks, Linear Regression, Support Vector Machines and Discriminant Analysis.

Body language and gestures have mostly been overlooked by affect analysis systems, yet body language often serves as the most reliable form of emotion expression. When subjects pretend to false affects, the pretensions are conveyed mostly through the linguistic medium rather than through body language. Body language can therefore be useful, combined with other modalities, for detecting lying and sarcastic behavior, which might otherwise be difficult if only text input is analyzed [7, 19]. Popular methods are based on temporal treatment of the inputs using Neural Networks, Naive Bayes classifiers and k-NN approaches. Neural Networks and the Naive Bayes classifier seem to outperform other methods at correctly classifying the affects (between 88% and 95%), but these methods need extensive training and work well only for classifying the major emotions, namely {angry, happy, sad, neutral}. Detecting gestures from videos is a time-consuming process and is also prone to bias introduced by the observers' own emotional states. Therefore, for video inputs, the major task is to consider the sequences of image frames in a timed manner and then aggregate over the affect outcomes [19].
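A hedged sketch of this frame-wise classification followed by temporal aggregation is given below, using scikit-learn's Gaussian Naive Bayes; the two pose features and the tiny training set are invented purely for illustration and do not come from the cited systems.

```python
import numpy as np
from collections import Counter
from sklearn.naive_bayes import GaussianNB

# Invented frame-level features: [head_tilt_deg, hand_speed] per frame.
X_train = np.array([[  5.0, 0.2], [  8.0, 0.1],   # neutral
                    [ 25.0, 1.5], [ 30.0, 1.8],   # angry
                    [ 15.0, 1.0], [ 12.0, 1.2],   # happy
                    [-20.0, 0.1], [-25.0, 0.2]])  # sad
y_train = ["neutral", "neutral", "angry", "angry",
           "happy", "happy", "sad", "sad"]

clf = GaussianNB().fit(X_train, y_train)

# Classify each frame of a new clip, then aggregate over time by majority vote.
clip_frames = np.array([[22.0, 1.4], [28.0, 1.7], [6.0, 0.3]])
frame_labels = clf.predict(clip_frames)
print(Counter(frame_labels).most_common(1)[0][0])  # aggregated affect for the clip
```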
C. Multi-Modal Systems

From the cases of individual modalities explained in the previous sub-section, it is clear that affect recognition could be biased or inaccurate if timed inputs are not sequentially analyzed and then aggregated, or if one modality alone is insufficient for affect information extraction. For example, a bored person may speak only a few sentences, which would be inadequate to indicate his state of boredom; but if, in the same time-frame, his body gestures are also observed, then aggregation of the speech content and body gestures gives the program a better chance of correctly learning that the person is in a 'bored' state [8].

Morency et al. [6] describe a multi-modal approach in which the combined features of audio, video and text are exploited for sentiment analysis. The system classifies audio-visual data into one of three sentiment categories, namely positive, negative and neutral. The multi-modal parameters considered for evaluation are pause and pitch measurements in speech, the polarity (positive or negative) of words, and gestures such as looking away and smiling. The automatic feature extraction (image, audio, text) from the input video clips is done with the help of tools:
- Video image: the look-away duration and the smile duration in the video clip are measured.
- Audio track: the pause duration and pitch are measured.
- Text feature: from the transcription provided with the video clips, the polarity of the sentiment words is evaluated with the help of a supporting lexicon of sentiment words that provides the sentiment polarity and its magnitude for each sentiment word it contains. Depending on the number of word occurrences, the overall sentiment polarity of the entire text is calculated as the sum over the polar words.
The input video clips are transcribed with words [6].
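The fusion step can be pictured with the small sketch below, which is only an illustrative stand-in for the approach of [6]: the features listed above are stacked into one vector per clip and fed to a simple linear classifier over the three sentiment classes; the feature values and training labels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per clip: [look_away_s, smile_s, pause_s, mean_pitch_hz, text_polarity_sum]
X = np.array([[0.5, 4.0, 0.8, 210.0,  3.0],   # mostly smiling, positive words
              [3.5, 0.2, 2.5, 150.0, -2.0],   # looking away, negative words
              [1.0, 1.0, 1.2, 180.0,  0.0]])  # mixed cues
y = ["positive", "negative", "neutral"]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Classify a new clip from its fused multi-modal feature vector.
new_clip = np.array([[0.3, 3.2, 0.9, 205.0, 2.0]])
print(clf.predict(new_clip))
```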
Figure 2: Abstraction of Multi-modal Affect Analysis Process

D. Tool for Visualization of Medical Data

Mane et al. [20] describe a clinically-oriented decision support tool for Comparative Effectiveness Research (CER) that uses visual analytics to provide appropriate views of dense medical data. The tool is not equipped with learning algorithms for dynamic prediction of treatment solutions, but it can help in the diagnosis and decision-making process for individualized patient treatment. The tool presents a demographic view of the patient's profile and a summarized, medicine-specific medication response; this is also compared to the responses of other patients under similar medication and with a similar profile.

IV. CONCLUSION

The major challenge faced by researchers in affective computing is dealing with large amounts of indirect information hidden in different modalities. Each type of input data needs to be treated differently by the affect model in order to extract and classify the maximum number of features accurately. Most of the analysis requires contextual cues as necessary inputs for such systems, and this is lacking in almost all of the research work done in this area to date.
Affect recognition and analysis becomes even more difficult, especially in the treatment of psychological or mental disorders, when the subject deliberately tries to cover up evident emotions or the observer fails to identify some traits. Here again, the input data model must be chosen so that patient history and contextual information, existing in any modality, can easily be incorporated into the algorithm design. In addition, researchers need to deal with the issue of improving performance while running different algorithms on each mode of input; achieving this becomes complex when implicit and non-obvious affects have to be recognized by the system. Therefore, to obtain predictably good algorithms, it would be a good idea to set protocols for the standardization of stored data sets for affect or medical prognosis. The fuzzy nature of the related information and of the affect categories cannot be changed, but a certain consistency in the generation and maintenance of electronic health records may help researchers focus more on the AI techniques to be implemented rather than worrying about data format issues.

REFERENCES

[1] Tal Sobol-Shikler and Peter Robinson, "Classification of Complex Information: Inference of Co-Occurring Affective States from Their Expressions in Speech", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 7, July 2010, pp. 1284-1297.
[2] http://news.stanford.edu/news/2012/may/context-affect-language-052912.html
[3] B. Liu, "Sentiment Analysis and Subjectivity", Handbook of Natural Language Processing, Second Edition (Editors: N. Indurkhya and F.J. Damerau), 2009.
[4] Rosalind W. Picard, "Affective Computing: From Laughter to IEEE", IEEE Transactions on Affective Computing, vol. 1, no. 1, January-June 2010, pp. 11-17.
[5] Federico Cismondi, André S. Fialho, Susana M. Vieira, Shane R. Reti, João M.C. Sousa, and Stan N. Finkelstein, "Missing data in medical databases: Impute, delete or classify?", Artificial Intelligence in Medicine 58 (2013), pp. 63-72.
[6] Louis-Philippe Morency, Rada Mihalcea, and Payal Doshi, "Towards Multimodal Sentiment Analysis: Harvesting Opinions from the Web", in Proceedings of ICMI'11, November 14-18, 2011, Alicante, Spain, pp. 169-176.
[7] Andrea Kleinsmith and Nadia Bianchi-Berthouze, "Affective Body Expression Perception and Recognition: A Survey", IEEE Transactions on Affective Computing, vol. 4, no. 1, January-March 2013, pp. 15-33.
[8] Zhihong Zeng, Maja Pantic, Glenn I. Roisman, and Thomas S. Huang, "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, January 2009, pp. 39-58.
[9] A. Esuli and F. Sebastiani, "SentiWordNet: A publicly available lexical resource for opinion mining", in Language Resources and Evaluation (LREC), 2006.
[10] Weifu Du and Songbo Tan, "Building Domain-oriented Sentiment Lexicon by Improved Information Bottleneck", in Proceedings of CIKM'09, November 2-6, 2009, Hong Kong, China, pp. 1749-1752.
[11] Jiajun Yan, David B. Bracewell, Fuji Ren, and Shingo Kuroiwa, "The Creation of a Chinese Emotion Ontology Based on HowNet", Engineering Letters, 16:1, EL_16_1_24, online publication 19 February 2008.
[12] Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka, "Affect Analysis Model: Novel Rule-Based Approach to Affect
Sensing from Text", Natural Language Engineering 17 (1), Cambridge University Press, September 2010, pp. 95-135.
[13] Neil O'Hare, Michael Davy, Adam Bermingham, Paul Ferguson, Páraic Sheridan, Cathal Gurrin, and Alan F. Smeaton, "Topic-Dependent Sentiment Analysis of Financial Blogs", in Proceedings of TSA'09, November 6, 2009, Hong Kong, China, pp. 9-16.
[14] Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka, "SentiFul: A Lexicon for Sentiment Analysis", IEEE Transactions on Affective Computing, vol. 2, no. 1, January-March 2011, pp. 22-36.
[15] Pedro H. Calais Guerra, Adriano Veloso, Wagner Meira Jr., and Virgílio Almeida, "From Bias to Opinion: A Transfer-Learning Approach to Real-Time Sentiment Analysis", in Proceedings of KDD'11, August 21-24, 2011, San Diego, California, USA, pp. 150-158.
[16] Joseph Polifroni, Stephanie Seneff, S.R.K. Branavan, Chao Wang, and Regina Barzilay, "Good Grief, I Can Speak It! Preliminary Experiments in Audio Restaurant Reviews", 2010 IEEE Spoken Language Technology Workshop, SLT 2010, Berkeley, California, USA, December 12-15, 2010, pp. 91-96.
[17] Ashish Tawari and Mohan Manubhai Trivedi, "Speech Emotion Analysis: Exploring the Role of Context", IEEE Transactions on Multimedia, vol. 12, no. 6, October 2010, pp. 502-509.
[18] Shangfei Wang, Zhilei Liu, Siliang Lv, Yanpeng Lv, Guobing Wu, Peng Peng, Fei Chen, and Xufa Wang, "A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference", IEEE Transactions on Multimedia, vol. 12, no. 7, November 2010, pp. 681-691.
[19] Rafael A. Calvo and Sidney D'Mello, "Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications",
IEEE Transactions on Affective Computing, vol. 1, no. 1, January-June 2010, pp. 18-37.
[20] Ketan K. Mane, Chris Bizon, Charles Schmitt, Phillips Owen, Bruce Burchett, Ricardo Pietrobon, and Kenneth Gersing, "VisualDecisionLinc: A Visual Analytics Approach for Comparative Effectiveness-Based Clinical Decision Support in Psychiatry", Journal of Biomedical Informatics 45 (2012), pp. 101-106.

Rekha S. Sugandhi completed her graduation (B.E. Computers) at K.K. Wagh College of Engineering, Nashik in 1998 and her post-graduation (M.Tech Computers) at Government College of Engineering, Pune in 2006, both under the University of Pune, India. She is currently pursuing a Ph.D. in Computer Science and Engineering at SGBAU, Amravati, India. Her research areas include natural language processing and linguistics, affective computing, machine learning, and data mining, and she has published about 19 papers on topics related to her areas of interest. She is currently working as Associate Professor at M.I.T. College of Engineering, Pune, India. She is a member of IACSIT, ISTE and Machine Intelligence Research Labs.

Anjali R. Mahajan completed her Ph.D. in Computer Science and Engineering at SGBAU, Amravati, India. Her research areas include data mining, computational intelligence, machine learning, networking, and image processing. She is currently working as Professor and Head of the Department of Computer Engineering at Priyadarshini Institute of Engineering and Technology, Nagpur, India. She has more than 20 papers published in various areas in reputed international and national journals. She is a member of Machine Intelligence Research Labs, USA, among other professional bodies.