| { |
| "paper_id": "S16-1029", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:27:00.518048Z" |
| }, |
| "title": "DSIC-ELIRF at SemEval-2016 Task 4: Message Polarity Classification in Twitter using a Support Vector Machine Approach", |
| "authors": [ |
| { |
| "first": "V\u00edctor", |
| "middle": [], |
| "last": "Mart\u00ednez", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "vmartinez2@dsic.upv.es" |
| }, |
| { |
| "first": "Ferran", |
| "middle": [], |
| "last": "Pla", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "fpla@dsic.upv.es" |
| }, |
| { |
| "first": "Llu\u00eds-F", |
| "middle": [], |
| "last": "Hurtado", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "lhurtado@dsic.upv.es" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper contains the description of our participation at task 4 (sub-task A, Message Polarity Classification) of SemEval-2016. Our proposed system consists mainly of three steps. Firstly, the preprocessing step includes the tokenization and identification of special elements including URLs, hashtags, user mentions and emoticons. The second step aims at selecting and extracting the feature set. Finally, a supervised approach, in particular a Support Vector Machine has been applied to tackle the classification problem.", |
| "pdf_parse": { |
| "paper_id": "S16-1029", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper contains the description of our participation at task 4 (sub-task A, Message Polarity Classification) of SemEval-2016. Our proposed system consists mainly of three steps. Firstly, the preprocessing step includes the tokenization and identification of special elements including URLs, hashtags, user mentions and emoticons. The second step aims at selecting and extracting the feature set. Finally, a supervised approach, in particular a Support Vector Machine has been applied to tackle the classification problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In the last few years, Twitter has become a source of a huge amount of information which introduces endless possibilities of research in the field of Sentiment Analysis. Sentiment Analysis, also called Opinion Mining, is a research area within Natural Language Processing whose aim is to identify the underlying emotion of a certain document, sentence or aspect (Liu, 2012) . As a case in point, Opinion Mining has been applied for recognizing reviews as recommended or not recommended (Turney, 2002) and for generating aspect-based summaries (Hu and Liu, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 362, |
| "end": 373, |
| "text": "(Liu, 2012)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 486, |
| "end": 500, |
| "text": "(Turney, 2002)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 543, |
| "end": 561, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of SemEval-2016 task 4 (Nakov et al., 2016) consists of categorizing tweets as positive, negative or neutral concerning the opinion that a user holds with regard to a certain topic. One issue to take into consiteration is that the language adopted in Social Media, especially in Twitter, needs to be treated differently than normalized language due to the use of specific characteristics such as users, hashtags, emoticons and slang as well as some linguistic phenomena including sarcasm and irony.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 52, |
| "text": "(Nakov et al., 2016)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our system is closely related to (Gim\u00e9nez et al., 2015) . Section 2 describes the proposed method which consists mainly of three steps. Firstly, the preprocessing step includes the tokenization and identification of special elements including URLs, hashtags, user mentions and emoticons. The second step aims at selecting and extracting the feature set. Finally, a supervised approach such as Support Vector Machine (SVM) has been applied to tackle the classification problem. In section 3, the experiments carried out are described. Finally, section 4 discusses the results obtained for the different experiments in the tuning phase and in the official competition.", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 55, |
| "text": "(Gim\u00e9nez et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we describe the steps carried out in this work to achieve the results obtained in Semeval 2016. In this approach, a matrix of ocurrences, in which tweets are represented as rows and features as columns, normalized by tf-idf was used to represent whether a certain feature appears or not in a tweet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Overview", |
| "sec_num": "2" |
| }, |
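The occurrence-matrix representation described above can be sketched in a few lines of pure Python (the paper itself relies on scikit-learn's implementation; the toy tweets and the smoothing-free idf formula here are our own simplifications):

```python
import math
from collections import Counter

def tfidf_matrix(tweets):
    """Build the tweet-by-feature matrix described above: one row per
    tweet, one column per vocabulary word, weighted by tf-idf."""
    docs = [t.split() for t in tweets]
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    # document frequency: in how many tweets each word appears
    df = Counter(w for d in docs for w in set(d))
    idf = {w: math.log(n / df[w]) for w in vocab}
    return vocab, [[Counter(d)[w] * idf[w] for w in vocab] for d in docs]

vocab, matrix = tfidf_matrix(["i love this film", "this film is terrible"])
print(len(vocab), matrix[0])
```

Note that a word appearing in every tweet (here "film") receives weight 0, which is exactly the discriminative effect tf-idf is meant to provide.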
| { |
| "text": "After fetching all the data from Twitter, our corpus needs to be preprocessed. As Twitter makes an extensive use of emoticons, URLs and concrete elements such as @User mentions and #hashtags, some regex are utilized to substitute these mentioned elements of special interest by labels of the form <URL>, <HASH>, <USER>and <EMOTICON>that let us count the amount of ap-pearances in a certain tweet. Indeed, after tokenizing the tweet, punctuation and stop words are removed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "2.1" |
| }, |
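A minimal sketch of this substitution step follows. The label strings (`<URL>`, `<HASH>`, `<USER>`, `<EMOTICON>`) are taken from the paper; the concrete regular expressions, including the emoticon pattern, are our assumptions:

```python
import re

# Substitute special Twitter elements by the labels used in the paper.
# The URL pattern runs first so that '#', '@' or ':' inside a URL are
# not matched by the later patterns.
PATTERNS = [
    (re.compile(r"https?://\S+"), "<URL>"),
    (re.compile(r"#\w+"), "<HASH>"),
    (re.compile(r"@\w+"), "<USER>"),
    (re.compile(r"[:;=8][-o*']?[)\](\[dDpP/\\]"), "<EMOTICON>"),
]

def preprocess(tweet):
    for pattern, label in PATTERNS:
        tweet = pattern.sub(label, tweet)
    return tweet

print(preprocess("Nice! :) http://t.co/x #happy @friend"))
# -> Nice! <EMOTICON> <URL> <HASH> <USER>
```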
| { |
| "text": "In this paper, the following features have been tried out althought not all were included for the final submision: see section 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "N-grams at word-level were selected ranging from 1-grams to 6-grams. These were combined in the experimentation process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Skip-grams at the word level with 2 words and 1 gap between them. As an example, \"What an amazing film\" will generate the following list of skipgrams [(\"What\",\"amazing\"),(\"an\",\"film\")]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
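The skip-gram feature above can be reproduced with a short helper (the function name and the `gap` parameter are ours; the example matches the paper's):

```python
# Pairs of words separated by `gap` intervening words (gap=1 here,
# as in the paper: 2 words with 1 gap between them).
def skipgrams(tokens, gap=1):
    step = gap + 1
    return [(tokens[i], tokens[i + step]) for i in range(len(tokens) - step)]

print(skipgrams("What an amazing film".split()))
# -> [('What', 'amazing'), ('an', 'film')]
```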
| { |
| "text": "K most frequent Skip-grams. This feature takes the k-most frequent Skip-grams and discards the other ones which are under the k threshold. Lexicons 1. Jeffrey (Hu and Liu, 2004) : This lexicon contains two sets of words: a positive and a negative word set. From this lexicon we obtain two scores coming from the addition of the positive words appering in a tweet and, likewise, from the addition of the negative words.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 177, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
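The two lexicon scores can be sketched as follows. The tiny positive/negative word sets here are stand-ins: the real lexicon (Hu and Liu, 2004) contains thousands of entries.

```python
from collections import Counter

# Toy stand-ins for the positive and negative word sets of the lexicon.
POSITIVE = {"amazing", "love", "great"}
NEGATIVE = {"terrible", "hate", "awful"}

def lexicon_scores(tokens):
    """Return (positive score, negative score): the number of lexicon
    word occurrences of each polarity in the tweet."""
    counts = Counter(tokens)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return pos, neg

print(lexicon_scores("what an amazing amazing film".split()))  # -> (2, 0)
```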
| { |
| "text": "2. NRC Emotion Lexicon (Mohammad and Turney, 2013): This lexicon contains a set of words and a value (0 or 1) expressing whether a word is associated to a certain emotion such as anger, anticipation, disgust, fear, joy, sadness, surprise and sad.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Twitter Features.The way of expressing ideas in Twitter as in other social networks differs from the language used in formal writing. That is why we should capture the peculiarities about this language that could be useful for identifying the polarity of a tweet in certain situations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 Elongated Words We count the number of elongated words. For instance, \"I love you sooooo much\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 ALL CAPS We count the number of words in upper case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 #Hashtags. We count the number of hashtags in a tweet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
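The three Twitter feature counts above can be sketched in one function. The precise elongation rule (a character repeated three or more times) is our assumption, as is treating single characters like "I" as not ALL CAPS:

```python
import re

def twitter_features(tweet):
    """Return (elongated word count, all-caps word count, hashtag count)."""
    tokens = tweet.split()
    # Elongated: any character repeated at least three times in a row.
    elongated = sum(1 for t in tokens if re.search(r"(.)\1\1", t))
    # ALL CAPS: fully upper-case tokens longer than one character.
    all_caps = sum(1 for t in tokens if t.isupper() and len(t) > 1)
    hashtags = sum(1 for t in tokens if t.startswith("#"))
    return elongated, all_caps, hashtags

print(twitter_features("I love you sooooo much"))  # -> (1, 0, 0)
```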
| { |
| "text": "Finally, a tf-idf normalization was applied in all the selected features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Set", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In this work, we classified the tweets polarity using a SVM formalism. An implementation using regularized linear models with stochastic gradient descent (SGD) learning is provided by the scikit-learn toolkit (Pedregosa et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 209, |
| "end": 233, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification", |
| "sec_num": "2.3" |
| }, |
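This classifier setup can be sketched with scikit-learn as below. The choice of `SGDClassifier(loss="hinge")` (a linear SVM trained with SGD) matches the paper's description; the pipeline wiring and the toy training data are ours:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

tweets = ["i love this film", "great amazing day",
          "terrible awful film", "i hate this"]
labels = ["positive", "positive", "negative", "negative"]

# loss="hinge" makes SGDClassifier a linear SVM trained with
# stochastic gradient descent, on top of tf-idf features.
model = make_pipeline(TfidfVectorizer(),
                      SGDClassifier(loss="hinge", random_state=0))
model.fit(tweets, labels)

print(model.predict(["amazing film"]))
```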
| { |
| "text": "In this section, we expose the experiments carried out. Every experiment applies the preprocessing explained in section 2.1. The dataset used to conduct the experimentation was the one adopted on SemEval-2013 task 2 subtask B (Nakov et al., 2013) . Indeed, all the experimentation applies a linear SVM as a classifier. The following lines express the features implemented in the most successful experiments.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 246, |
| "text": "(Nakov et al., 2013)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Experiment 1 -Unigrams and Bigrams -Jeffrey's Lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Experiment 2 -1-6 grams -Jeffrey's Lexicon. -Unigrams and Bigrams -Jeffrey's and NRC Emotion Lexicons.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "-#Hashtags", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Experiment 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "-Unigrams and Bigrams -Jeffrey's and NRC Emotion Lexicons.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "-ALLCAPS", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Experiment 9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "-Unigrams and Bigrams -Jeffrey's and NRC Emotion Lexicons.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "-Elongated Words", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This section summarizes the results of the tuning phase. As we can see in Table 1 , the best approach is the one used in experiment 5 which uses only unigrams, bigrams and both lexicons. This fact shows the importance of unigrams and bigrams as well as the relevance of using lexicons which can improve considerably a message polarity classification model. Moreover, using n-grams larger than bigrams (6-grams in our experiments) can introduce noise in the model. As we can see in Table 1 , Twitter features decrease the performance of the classification. In experiment 6, we use all Twitter features together which leads us to a decreasing of (F1 pos +F1 neg )/2 from 0.6399 to 0.4857. Likewise, the results of experiments 7, 8 and 9 which use Twitter features individually show a diminution of similar magnitute in the evaluation measure (F1 pos +F1 neg )/2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 81, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 481, |
| "end": 488, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
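For reference, the evaluation measure quoted above is simply the average of the F1 scores of the positive and the negative class; the input values below are illustrative, not taken from the paper:

```python
# (F1pos + F1neg) / 2: the SemEval tuning-phase measure used above.
def semeval_measure(f1_pos, f1_neg):
    return (f1_pos + f1_neg) / 2

print(semeval_measure(0.75, 0.5))  # -> 0.625
```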
| { |
| "text": "In this work, we presented Skip-grams as an alternative to N-grams and we see that N-grams performed slightly better than Skip-grams. However, this difference in the performance is not statistically significant and can vary between different corpora. In addition, experiment 4 includes a variation taking only the one hundred most frequent Skip-grams. The comparison between experiment 3 and 4 shows that using the most frequent Skip-grams leads to better results than using all the Skip-grams generated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "N-grams vs Skip-grams", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For the competition, the model used in experiment 5 which outperformed the others in the tuning phase was submitted. This model consists of unigrams, bigrams and both lexicons (Jeffrey and NRC emotion lexicon). In the official rank our system achieved the 22nd out of 34 teams.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Competition Results", |
| "sec_num": "4.2" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been partially funded by the project ASLP-MULAN: Audio, Speech and Language Processing for Multimedia Analytics (Spanish MINECO TIN2014-54288-C4-3-R).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Elirf: A support vector machine approach for sentiment analysis tasks in twitter at semeval-2015", |
| "authors": [ |
| { |
| "first": "Mayte", |
| "middle": [], |
| "last": "Gim\u00e9nez", |
| "suffix": "" |
| }, |
| { |
| "first": "Pla", |
| "middle": [], |
| "last": "Ferran", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds-F", |
| "middle": [], |
| "last": "Hurtado", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "574--581", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mayte Gim\u00e9nez, Pla Ferran, and Llu\u00eds-F. Hurtado. 2015. Elirf: A support vector machine approach for senti- ment analysis tasks in twitter at semeval-2015. In In Proceedings of the 9th International Workshop on Se- mantic Evaluation (SemEval 2015), pages 574--581. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Sentiment analysis and opinion mining. Synthesis lectures on human language technologies", |
| "authors": [ |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "5", |
| "issue": "", |
| "pages": "1--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bing Liu. 2012. Sentiment analysis and opinion min- ing. Synthesis lectures on human language technolo- gies, 5(1):1-167.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Nrc emotion lexicon", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "NRC Technical Report", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif M Mohammad and Peter D Turney. 2013. Nrc emo- tion lexicon. Technical report, NRC Technical Report.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Semeval-2013 task 2: Sentiment analysis in twitter", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Zornitsa", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Preslav Nakov, Zornitsa Kozareva, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Theresa Wilson. 2013. Semeval-2013 task 2: Sentiment analysis in twitter.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Eval- uation, SemEval '16, San Diego, California, June. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Scikit-learn: Machine learning in python", |
| "authors": [ |
| { |
| "first": "Fabian", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Ga\u00ebl", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "Bertrand", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathieu", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vin- cent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. The Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "417--424", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter D Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classifi- cation of reviews. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 417-424. Association for Computational Lin- guistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": {} |
| } |
| } |