Context-Aware Sentiment Detection
Albert Weichselbraun
February 10, 2011
Agenda
- Motivation
  - On the Need for Contextualization
  - Indicators for Missing Context
- Method
  - Context-Aware Sentiment Detection
  - Creation of Contextualized Sentiment Lexicons
  - Example
  - Cross-corpus Contextualized Sentiment Lexicons
- Evaluation
- Outlook and Conclusions
Motivation
- Pang et al. (2002): state-of-the-art machine learning approaches do not unfold their full potential when applied to sentiment detection
- Lexicon-based approach
  - no labeled training corpus necessary
  - applicable across domains
  - high throughput
On the Need for Contextualization
Positive:
- The repair of my car was satisfying.
- This movie's plot is unpredictable.
- The long peace brought wealth and safety to the people.

Negative:
- I had many complaints after my camera's repair.
- The brakes of this car are unpredictable.
- This peace is a lie.
Frequency Diagram
[Figure-only slides: frequency diagrams used to identify ambiguous sentiment terms; no further textual content recoverable.]
Method - Contextualized Sentiment Lexicon
- Use a contextualized sentiment lexicon (sketched below)
  - Based on ordinary sentiment lexicons
  - Contains stable sentiment terms and ambiguous terms
  - Uses context terms for disambiguation
- Derived from online reviews (Amazon, TripAdvisor)
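The slides do not show how such a lexicon is represented internally. As an illustration only, the structure below is a Python sketch; the field names and the stable entries are assumptions, while the context-term probabilities for "repair" follow the example given later in the talk.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AmbiguousTerm:
    """A term whose polarity depends on context (e.g. 'repair')."""
    prior: float = 0.0
    # context term -> P(positive context | context term)
    context_terms: Dict[str, float] = field(default_factory=dict)

@dataclass
class ContextualizedLexicon:
    """Stable terms keep a fixed sentiment value; ambiguous terms carry
    context terms used for disambiguation."""
    stable: Dict[str, float] = field(default_factory=dict)
    ambiguous: Dict[str, AmbiguousTerm] = field(default_factory=dict)

# Illustrative entries only; the values for 'repair' follow the example slide,
# the stable entries are assumptions.
lexicon = ContextualizedLexicon(
    stable={"friendly": 1.0, "horrible": -1.0},
    ambiguous={"repair": AmbiguousTerm(
        prior=0.0,
        context_terms={"reliable": 0.80, "friendly": 0.70, "quickly": 0.65},
    )},
)
```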
Method - Context Aware Sentiment Detection

Refined sentiment detection:

  s_total = Σ_i n(t_i) · [ s(t_i) + s'(t_i | c) ]

- Identify ambiguous terms (s') → frequency diagrams, using the criteria (see the sketch below):

    σ_i ≥ v          (3)
    µ_i + σ_i ≥ w    (4a)
    µ_i − σ_i ≤ −w   (4b)

- Learn context terms (c) for disambiguation → conditional probabilities
- Recalculate the sentiment value of the contextualized sentiment terms
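The slide states the three criteria but not how they combine, nor the threshold values v and w. A minimal sketch of the ambiguity test, assuming all three conditions must hold and using placeholder thresholds:

```python
from statistics import mean, pstdev

def is_ambiguous(context_sentiments, v=0.3, w=0.5):
    """Decide whether a term is ambiguous from the sentiment values it
    receives in different contexts (criteria (3), (4a), (4b) on the slide).
    The threshold values v, w and the conjunction of all three criteria
    are assumptions."""
    mu = mean(context_sentiments)        # µ_i
    sigma = pstdev(context_sentiments)   # σ_i
    return (sigma >= v                   # (3)  unstable enough
            and mu + sigma >= w          # (4a) reaches the positive range
            and mu - sigma <= -w)        # (4b) reaches the negative range

print(is_ambiguous([0.8, -0.7, 0.6, -0.5]))  # True: swings between polarities
print(is_ambiguous([0.9, 0.8, 0.7, 0.9]))    # False: stable positive term
```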
Example

The service staff was friendly. They accomplished the repair of my car's motor very quickly. After driving it for another three months I can say that the motor is as reliable as it was before.

Ambiguous term: repair

  Context Term (c_i)   P(C+ | c_i)
  reliable             0.80
  friendly             0.70
  quickly              0.65

⇒ repair is used in a positive context → positive sentiment
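How the conditional probabilities P(C+ | c_i) enter the refined score s_total is not spelled out on the slide. One possible reading, sketched below, averages them over the observed context terms and rescales the result to obtain s'(t | c); the lexicon entries and the averaging step are illustrative assumptions.

```python
# Minimal lexicon for the example sentence; the values for 'repair' are taken
# from the slide, the stable entries are assumptions.
STABLE = {"friendly": 1.0, "quickly": 0.5, "reliable": 1.0}
AMBIGUOUS = {"repair": {"reliable": 0.80, "friendly": 0.70, "quickly": 0.65}}

def refined_sentiment(tokens):
    """s_total = Σ_i n(t_i) · [s(t_i) + s'(t_i | c)], with s'(t | c) assumed
    to be the mean P(C+ | c_i) over observed context terms, rescaled from
    [0, 1] to [-1, 1]."""
    present = set(tokens)
    total = 0.0
    for term in tokens:                     # iterating tokens accounts for n(t_i)
        if term in STABLE:
            total += STABLE[term]           # stable sentiment value s(t_i)
        elif term in AMBIGUOUS:
            probs = [p for c, p in AMBIGUOUS[term].items() if c in present]
            if probs:
                total += 2 * sum(probs) / len(probs) - 1   # s'(t_i | c)
    return total

tokens = ["friendly", "repair", "quickly", "reliable"]
print(refined_sentiment(tokens))  # > 0: 'repair' resolves to a positive context
```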
Method - Real World Examples

  Ambiguous Term   SV_orig   Example
  busy              1        The hotel is located on a busy road.
  complaint        -1        My only complaint would be the service.
  cool             -1        Our room felt like a really cool European apartment with a rooftop terrace.
  expensive        -1        The room was one of the more expensive hotels in Vienna but still excellent.
  quality           1        Poor quality copies with one edge always dark.
  better            1        Let's hope they work better.
  cost             -1        Toner cost is way behind competitors.
Evaluations
1. Comparison to a baseline – Do we outperform a lexicon-based approach which does not consider context?
2. Intra-domain sentiment detection – Does the removal of unstable sentiment terms have a positive effect?
3. Cross-domain sentiment detection – Determine the cross-domain performance of a generic contextualized sentiment lexicon.
4. Comparison to a machine learning approach – Intra-domain and cross-domain performance.
Evaluation - Setting
- Method: 10-fold cross validation
- Results reported as per-class recall (R), precision (P), and F1 (see the sketch below)
- Test corpora:
  - Equal number of positive and negative reviews
  - Amazon: 2,500 reviews
  - TripAdvisor: 1,800 reviews
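For reference, a short sketch of how the per-class scores in the following tables can be computed from confusion counts; the counts used here are purely illustrative.

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class precision, recall and F1 from true positives (tp),
    false positives (fp) and false negatives (fn)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(precision_recall_f1(tp=750, fp=250, fn=250))  # (0.75, 0.75, 0.75)
```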
Evaluation - Context Aware Sentiment Detection

Corpus: Amazon
        Baseline                Context Aware
        R      P      F1        R      P      F1
  Pos   0.80   0.64   0.71      0.75   0.75   0.74
  Neg   0.53   0.74   0.62      0.71   0.79   0.73

Corpus: TripAdvisor
        Baseline                Context Aware
        R      P      F1        R      P      F1
  Pos   0.96   0.60   0.74      0.97   0.66   0.79
  Neg   0.34   0.90   0.49      0.46   0.95   0.61
Evaluation - Intra-Domain Performance

Test corpus: Amazon
        Domain-specific (Amazon)        Generic
        R      P      F1                R      P      F1
  Pos   0.75   0.75   0.74              0.77   0.72   0.74
  Neg   0.71   0.79   0.73              0.67   0.77   0.72

Test corpus: TripAdvisor
        Domain-specific (TripAdvisor)   Generic
        R      P      F1                R      P      F1
  Pos   0.97   0.66   0.79              0.89   0.74   0.81
  Neg   0.46   0.95   0.61              0.66   0.87   0.75
Evaluation - Cross Domain Performance

Test corpus: Amazon
        Domain-specific (TripAdvisor)   Generic
        R      P      F1                R      P      F1
  Pos   0.76   0.67   0.71              0.77   0.72   0.74
  Neg   0.58   0.73   0.64              0.67   0.77   0.72

Test corpus: TripAdvisor
        Domain-specific (Amazon)        Generic
        R      P      F1                R      P      F1
  Pos   0.84   0.69   0.75              0.89   0.74   0.81
  Neg   0.58   0.80   0.66              0.66   0.87   0.75
Evaluation - Naïve Bayes (NLTK)

  Training      Test           ⊕     ⊖
  Amazon        TripAdvisor    87    32
  Amazon        Amazon         89    70
  TripAdvisor   TripAdvisor    75    81
  TripAdvisor   Amazon         60    77
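The slide names NLTK's Naïve Bayes classifier but not the feature representation. A minimal sketch of such a baseline, assuming simple bag-of-words features and whitespace tokenization; the training sentences are placeholders, not corpus data.

```python
import nltk
from nltk.classify import NaiveBayesClassifier

def bag_of_words(text):
    """Binary bag-of-words features from a whitespace-tokenized review."""
    return {token.lower(): True for token in text.split()}

# Placeholder reviews standing in for the Amazon / TripAdvisor corpora
train_reviews = [("The staff was friendly and helpful", "pos"),
                 ("Poor quality copies with one edge always dark", "neg")]
test_reviews = [("Really cool apartment with a rooftop terrace", "pos")]

train_set = [(bag_of_words(t), label) for t, label in train_reviews]
test_set = [(bag_of_words(t), label) for t, label in test_reviews]

classifier = NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
```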
Summary
- Lexicon-based approaches:
  - Simple, no labelled data required
  - Applicable across domains
  - High throughput
  - Can serve as a baseline
- Machine Learning approaches:
  - Powerful, but domain-specific
  - Require labelled training data
→ The introduced approach combines these advantages (cross-domain, high throughput, high performance)