| { |
| "paper_id": "S18-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:44:54.975484Z" |
| }, |
| "title": "Yuan at SemEval-2018 Task 1: Tweets Emotion Intensity Prediction using Ensemble Recurrent Neural Network", |
| "authors": [ |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Yunnan University", |
| "location": { |
| "settlement": "Yunnan", |
| "country": "P.R. China" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Xiaobing", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Yunnan University", |
| "location": { |
| "settlement": "Yunnan", |
| "country": "P.R. China" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "This paper describes our system for SemEval-2018 Task 1, subtask 3: given a tweet, determine the intensity of sentiment or valence (V) that best represents the mental state of the tweeter, as a real-valued score between 0 (most negative) and 1 (most positive). The proposed system extracts features from tweets using existing emotional dictionaries and represents words with word embeddings, then feeds the joint representations into a bidirectional long short-term memory (BiLSTM) network to learn a regression model. To boost performance, we ensemble several BiLSTMs. We ranked 6th among all teams in subtask 3. Our approach achieves a Pearson (all instances) score of 0.836 and a Pearson (gold in 0.5-1) score of 0.667, outperforming the baseline model of this task by 25.1% and 21.8% on these two metrics, respectively.",
| "pdf_parse": { |
| "paper_id": "S18-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "This paper describes our system for SemEval-2018 Task 1, subtask 3: given a tweet, determine the intensity of sentiment or valence (V) that best represents the mental state of the tweeter, as a real-valued score between 0 (most negative) and 1 (most positive). The proposed system extracts features from tweets using existing emotional dictionaries and represents words with word embeddings, then feeds the joint representations into a bidirectional long short-term memory (BiLSTM) network to learn a regression model. To boost performance, we ensemble several BiLSTMs. We ranked 6th among all teams in subtask 3. Our approach achieves a Pearson (all instances) score of 0.836 and a Pearson (gold in 0.5-1) score of 0.667, outperforming the baseline model of this task by 25.1% and 21.8% on these two metrics, respectively.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Sentiment analysis (SA) is a field of knowledge which deals with the analysis of people's opinions, sentiments, evaluations, appraisals, attitudes and emotions towards particular entities (Liu, 2012). EmoInt (Mohammad and Bravo-Marquez, 2017) is a shared task hosted by WASSA 2017, aiming to predict emotion intensity in tweets. SemEval-2018 Task 1, subtask 3 (Mohammad et al., 2018) is similar to EmoInt, but its goal is to detect valence or sentiment intensity, where scores are floating-point values between 0 and 1, representing low and high intensities of the expressed emotion, respectively. Unlike EmoInt, where each tweet belongs to one of four emotion-specific datasets (anger, fear, joy, sadness) and its polarity can be inferred accordingly, in subtask 3 we do not know in advance whether a tweet's emotional intensity is positive or negative. This remains a challenging and active area of research: tweets make extensive use of hashtags, slang, abbreviations and emoticons, and are usually typed on mobile devices such as phones, laptops or tablets, which can result in a substantial number of typos.",
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 199, |
| "text": "(Liu, 2012)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Existing methods for modeling emotion intensity rely largely on manually constructed lexicons, which contain intensity weights for each available word (Mohammad and Bravo-Marquez, 2017a; Neviarouskaya et al., 2007). The intensity of a whole tweet can be deduced by combining the individual scores of its words, which is simple but ignores word order and the compositionality of language. Building such lexicons is a labour-intensive procedure. Given a sufficient amount of training data, however, models that combine the feature extraction and classification or regression stages can be learned instead.",
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 203, |
| "text": "(Mohammad and Bravo-Marquez, 2017a;", |
| "ref_id": null |
| }, |
| { |
| "start": 204, |
| "end": 231, |
| "text": "Neviarouskaya et al., 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Deep learning methods have also been applied to this problem, including deep neural architectures for emotion intensity prediction in tweets (Goel et al., 2017) and character- and word-level recurrent neural models for tweet emotion intensity detection (Lakomkin et al., 2017).",
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 154, |
| "text": "(Goel et al., 2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 246, |
| "end": 269, |
| "text": "(Lakomkin et al., 2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In our work, we first clean the tweets, then build lexical features and find optimal combinations of features to produce a final vector representation of each tweet, next train a neural network regression model, and finally obtain the tweet's intensity score. In addition, we tune our models' parameters and ensemble several models to obtain the best-performing results.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We use the dataset provided by the official organizers to train our system: there are 1,181 labeled training tweets and 449 labeled development tweets. The test set consists of 17,874 unlabeled tweets, whose gold labels were released only after the evaluation period. Before training the model or predicting on the test set, we first clean the tweets; this step is imperative. We apply the following preprocessing steps.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data cleaning", |
| "sec_num": "2" |
| }, |
| { |
"text": "(1) Hashtags are crucial markers for determining sentiment. The \"#\" symbol is removed and the word itself is retained; for example, the hashtag \"#the_best_one\" becomes \"the best one\".",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data cleaning", |
| "sec_num": "2" |
| }, |
| { |
"text": "(2) Username mentions are replaced with the token \"username\".",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data cleaning", |
| "sec_num": "2" |
| }, |
| { |
"text": "(3) Contractions: words such as \"don't\", \"I've\" and \"I'll\" are split into \"do\" \"n't\", \"'ve\" and \"'ll\". (4) Punctuation: only \"!\" and \"?\" are retained; others such as \";\", \">\", \")\", \",\" and \"-\" are deleted. (5) Numerical symbols: since the data in the dataset is relatively standardized and contains few numbers, we remove all digits and keep only English words. (6) Extra spaces are removed and all words are lowercased.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data cleaning", |
| "sec_num": "2" |
| }, |
| { |
"text": "To extract features from tweets as completely as possible, we consider two kinds of representations: annotated lexicons and pre-trained word embeddings.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "3" |
| }, |
| { |
"text": "For extracting lexicon features, we follow the procedure of the baseline system provided for the WASSA Emotion Intensity Task. The knowledge sources that have been used are: MPQA subjectivity lexicon (Wilson et al., 2005), Bing Liu lexicon (Ding et al., 2008), AFINN (Nielsen, 2011), Sentiment140 (Kiritchenko et al., 2014),",
| "cite_spans": [ |
| { |
| "start": 201, |
| "end": 222, |
| "text": "(Wilson et al., 2005)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 225, |
| "end": 261, |
| "text": "Bing Liu lexicon (Ding et al., 2008)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotated Lexicon", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Hashtag Sentiment Lexicon (Mohammad and Kiritchenko, 2015), NRC Hashtag Emotion Association Lexicon (Mohammad et al., 2013), NRC Word-Emotion Association Lexicon (Mohammad and Turney, 2013), NRC-10 Expanded Lexicon (Bravo Marquez et al., 2016) and SentiWordNet (Esuli and Sebastiani, 2007). Two more features are calculated on the basis of emoticons (obtained from AFINN (Nielsen, 2011)) and negations present in the text. We use several of the above lexicons as follows:",
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 223, |
| "text": "(Bravo Marquez et al., 2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 245, |
| "end": 273, |
| "text": "(Esuli and Sebastiani, 2007)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
"text": "\u2022 Emoji Valence (EV): This is a hand-classified lexicon of Unicode emojis, rated on a scale of -5 (negative) to 5 (positive).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 SentiWordNet (SWN): Calculates positive and negative sentiment score using SentiWordNet, which is an opinion mining resource available through NLTK.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
"text": "\u2022 Depeche Mood (DM) (Staiano and Guerini, 2014): This is a lexicon of about 37,000 unigrams annotated with real-valued scores for the emotional states afraid, amused, angry, annoyed, don't care, happy, inspired and sad.",
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 47, |
| "text": "(Staiano and Guerini, 2014)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Emoticon Sentiment Lexicon: Note that this is a sentiment lexicon drawn from emoticons, and is not an emotion lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
"text": "\u2022 NRC-Emoticon-AffLexNegLex-v1.0: Each line of this lexicon represents a real-valued sentiment score: score = PMI(w, pos) - PMI(w, neg), where PMI stands for the pointwise mutual information between a term w and the positive/negative class.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 NRC-Hashtag-Sentiment-Lexicon-v1.0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
"text": "(Mohammad and Turney, 2013): This lexicon associates words with positive (negative) sentiment; it was generated automatically from tweets containing sentiment-word hashtags.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
"text": "\u2022 NRC-Hashtag-Sentiment-AffLexNegLex-1.0: The same kind of lexicon as Sentiment140, but here only tweets with emotional hashtags are considered during training.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NRC", |
| "sec_num": null |
| }, |
| { |
"text": "The text can be converted into word embeddings, which represent each word with a d-dimensional vector (Mikolov et al., 2013). Since we deal with tweets, we use GloVe word embeddings trained on 2 billion tweets from Twitter (Pennington et al., 2014); vectors of 100, 200 and 300 dimensions are provided as part of the pre-trained model. For this work, we use the 300-dimensional vectors of 42B tokens. We also considered GoogleNews-vectors-negative300 in our experiments, but its performance was not as good as that of the GloVe embeddings.",
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 136, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 254, |
| "end": 279, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Embedding", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Based on the extracted lexicon features and word embeddings, we can represent each word in a tweet as a vector in a high-dimensional space of dimension d + l, where d is the dimension of the GloVe word embedding (300) and l is the length of the additional lexical feature vector. After representing the tweets, we need to train models. Since the task requires the computation of a real-valued emotion intensity score for the tweets in the test set, we explore several regression methods. Our system is implemented in Keras, and we finally choose the best single BiLSTM model, which contains two BiLSTM layers following the embedding layer, to which we add a dropout layer. Some parameters of our model are: dropout probabilities of 0.25 and 0.5, respectively; 512 and 256 units in the BiLSTM layers, respectively; and 256 units in the fully connected layer. The complete model structure is shown in Figure 1 . Figure 1 : A two-layer bidirectional LSTM model.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 922, |
| "end": 931, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 934, |
| "end": 942, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Training", |
| "sec_num": "4" |
| }, |
| { |
"text": "When training the model in Keras, only a few parameters need to be changed; we tune parameters such as the choice of loss function, the dropout probability and the dimension of the BiLSTM layers. For feature combination we use all the annotated lexicons mentioned in Section 3.1 so as to control the variables, and we do not consider the impact of different dictionary combinations on the results, which may be discussed in future work. Note that all of our tuning is done on the development set, and we record the results each time we finish training a model. Ensembling is a widely used method to improve the performance of the overall system by combining the predictions of several classifiers. Our system ensembles ten identical BiLSTM models and averages their results; it turns out that the ensemble result is better than that of a single model. That is to say, when we ensemble the models, each single BiLSTM receives the same weight.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System tuning", |
| "sec_num": "5" |
| }, |
| { |
"text": "All our experiments have been developed using the Keras deep learning library with the Theano backend and with CUDA enabled, on a computer with an Intel Core(TM) i3 @ 3.4GHz, 16GB of RAM and a GeForce GTX 1060 GPU. After testing many neural network models, we found the best results with LSTM and BiLSTM models. Table 1 shows the results of a single-layer LSTM while varying the loss function and word embedding: the MAE loss function gives the best result with GloVe embeddings, and in general GloVe embeddings perform better than word2vec embeddings. Table 2 shows the ensemble results of ten single-layer BiLSTM models under different loss functions and word embeddings: here the MAPE loss function gives the best result with GloVe embeddings. Table 3 shows the corresponding ensemble results for the two-layer BiLSTM: again the MAPE loss function with GloVe embeddings performs best, and GloVe embeddings generally outperform word2vec embeddings.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 341, |
| "end": 348, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 610, |
| "end": 617, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 943, |
| "end": 950, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment and results", |
| "sec_num": "6" |
| }, |
| { |
"text": "Systems in this subtask are evaluated using the Pearson correlation coefficient, which computes a bivariate linear correlation, and a secondary evaluation metric, the Pearson correlation over the subset of the test set that includes only tweets with intensity scores greater than or equal to 0.5. We present the results of the system submitted to the competition leaderboard in Table 4 . The score of our system is 0.836 (Pearson) and 0.667 (Pearson, gold in 0.5-1). Note that the model we used on the test set is the best model on the development set, i.e., in Table 3 ",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 386, |
| "end": 393, |
| "text": "Table 4", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 569, |
| "end": 576, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment and results", |
| "sec_num": "6" |
| }, |
| { |
"text": "In this paper, we propose a deep learning framework to predict emotion intensity in tweets. The proposed system is based on two BiLSTM layers, with the last layer of the model performing linear regression so that we obtain the intensity score, a continuous emotional value. Before training the model we perform feature extraction and represent the tweets with word embeddings. Both the single model and the ensemble model are described in detail with a view to making our experiments replicable. The optimal parameters are reported, along with our method of bringing the approaches together. Our submitted system beats the baseline system by about 25.1% on the test set. Our source code is available at https://github.com/ynuwm/SemEval-2018",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Sentiment analysis and opinion mining Synthesis Lectures on Human Language Technologies", |
| "authors": [ |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bing Liu. 2012. Sentiment analysis and opinion mining Synthesis Lectures on Human Language Technologies. Morgan &Claypool publishers.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "WASSA-2017 shared task on emotion intensity", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "Felipe", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bravo-Marquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Saif M. Mohammad and Felipe Bravo-Marquez. 2017. WASSA-2017 shared task on emotion intensity. In Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA).",
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
"title": "Textual affect sensing for sociable and expressive online communication",
| "authors": [ |
| { |
| "first": "Alena", |
| "middle": [], |
| "last": "Neviarouskaya", |
| "suffix": "" |
| }, |
| { |
| "first": "Helmut", |
| "middle": [], |
| "last": "Prendinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitsuru", |
| "middle": [], |
| "last": "Ishizuka", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2007. Textual affect sensing for sociable and expressive online communication.",
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Opinion mining with deep recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Ozan", |
| "middle": [], |
| "last": "Irsoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
"venue": "Proceedings of the Conference on EMNLP",
| "volume": "", |
| "issue": "", |
| "pages": "720--728", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the Conference on EMNLP, pages 720-728.",
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Perelygin", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Jean", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
"venue": "Proceedings of the Conference on EMNLP",
| "volume": "1631", |
| "issue": "", |
| "pages": "1631--1642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on EMNLP, volume 1631, pages 1631-1642.",
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
"title": "Recognizing contextual polarity in phrase-level sentiment analysis",
| "authors": [ |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "HLT/EMNLP 2005,Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "347--354", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada. The Association for Computational Linguistics, pages 347-354.",
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A new ANEW: evaluation of a word list for sentiment analysis in microblogs", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Finn Arup Nielsen", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
"venue": "Proceedings of the ESWC 2011 Workshop on 'Making Sense of Microposts': Big things come in small packages",
| "volume": "718", |
| "issue": "", |
| "pages": "93--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Finn Arup Nielsen. 2011. A new ANEW: evaluation of a word list for sentiment analysis in microblogs. In Matthew Rowe, Milan Stankovic, Aba-Sah Dadzie, and Mariann Hardey, editors, Proceedings of the ESWC 2011 Workshop on 'Making Sense of Microposts': Big things come in small packages, Heraklion, Crete, Greece, May 30, 2011. CEUR-WS.org, volume 718 of CEUR Workshop Proceedings, pages 93-98.",
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Using hashtags to capture fine emotion categories from tweets", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
"venue": "Computational Intelligence",
| "volume": "31", |
| "issue": "2", |
| "pages": "301--326", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Mohammad Saif and Svetlana Kiritchenko. 2015. Using hashtags to capture fine emotion categories from tweets. Computational Intelligence 31(2):301-326.",
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Crowd sourcing a word-emotion association lexicon", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "D" |
| ], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "29", |
| "issue": "", |
| "pages": "436--465", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon 29(3):436-465.",
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Depeche mood: A lexicon for emotion analysis from crowdannotated news", |
| "authors": [ |
| { |
| "first": "Jacopo", |
| "middle": [], |
| "last": "Staiano", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Guerini", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1405.1605" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
"raw_text": "Jacopo Staiano and Marco Guerini. 2014. Depeche mood: A lexicon for emotion analysis from crowd-annotated news. arXiv preprint arXiv:1405.1605.",
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Crowdsourcing a word-emotion association lexicon", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Intelligence", |
| "volume": "29", |
| "issue": "3", |
| "pages": "436--465", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence 29(3):436-465.",
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Semeval-2018 Task 1: Affect in tweets", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "Felipe", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Bravo-Marquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Salameh", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval2018)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceedings of International Workshop on Semantic Evaluation (SemEval2018), New Orleans, LA, USA.",
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "type_str": "table", |
"content": "<table><tr><td/><td>#</td><td>Team</td><td>P</td><td>P (gold</td></tr><tr><td/><td/><td/><td/><td>0.5-1)</td></tr><tr><td/><td>1</td><td>SeerNet</td><td>0.873</td><td>0.697</td></tr><tr><td/><td>2</td><td>TCS Research</td><td>0.861</td><td>0.680</td></tr><tr><td/><td>3</td><td>PlusEmo2Vec</td><td>0.860</td><td>0.691</td></tr><tr><td/><td>4</td><td>NTUA-SLP</td><td>0.851</td><td>0.688</td></tr><tr><td/><td>5</td><td>Amobee</td><td>0.843</td><td>0.644</td></tr><tr><td/><td>6</td><td>Yuan</td><td>0.836</td><td>0.667</td></tr><tr><td/><td>7</td><td>nlpzzx</td><td>0.835</td><td>0.670</td></tr><tr><td>Loss function</td><td>Pearson score</td><td/><td/></tr><tr><td>MSE(Glove)</td><td>0.804</td><td/><td/></tr><tr><td>MAE(Glove)</td><td>0.818</td><td/><td/></tr><tr><td>MAPE(Glove)</td><td>0.815</td><td/><td/></tr><tr><td>MSLE(Glove)</td><td>0.801</td><td/><td/></tr><tr><td>MSE(w2v)</td><td>0.801</td><td/><td/></tr><tr><td>MAE(w2v)</td><td>0.798</td><td/><td/></tr><tr><td>MAPE(w2v)</td><td>0.799</td><td/><td/></tr><tr><td>MSLE(w2v)</td><td>0.786</td><td/><td/></tr><tr><td colspan=\"2\">Table 1: Performance on development dataset. Single-</td><td/><td/></tr><tr><td colspan=\"2\">layer LSTM under different loss functions and</td><td/><td/></tr><tr><td>different word embeddings.</td><td/><td/><td/></tr><tr><td>Loss function</td><td>Pearson score</td><td/><td/></tr><tr><td>MSE(Glove)</td><td>0.799</td><td/><td/></tr><tr><td>MAE(Glove)</td><td>0.820</td><td/><td/></tr><tr><td>MAPE(Glove)</td><td>0.822</td><td/><td/></tr><tr><td>MSLE(Glove)</td><td>0.801</td><td/><td/></tr><tr><td>MSE(w2v)</td><td>0.797</td><td/><td/></tr><tr><td>MAE(w2v)</td><td>0.810</td><td/><td/></tr><tr><td>MAPE(w2v)</td><td>0.799</td><td/><td/></tr><tr><td>MSLE(w2v)</td><td>0.784</td><td/><td/></tr><tr><td colspan=\"2\">Table 2: Performance on development dataset.</td><td/><td/></tr><tr><td colspan=\"2\">Ensemble result of single-layer BiLSTM under</td><td/><td/></tr><tr><td colspan=\"2\">different loss functions and word embeddings.</td><td/><td/></tr><tr><td>Loss function</td><td>Pearson score</td><td/><td/></tr><tr><td>MSE(Glove)</td><td>0.805</td><td/><td/></tr><tr><td>MAE(Glove)</td><td>0.826</td><td/><td/></tr><tr><td>MAPE(Glove)</td><td>0.827</td><td/><td/></tr><tr><td>MSLE(Glove)</td><td>0.806</td><td/><td/></tr><tr><td>MSE(w2v)</td><td>0.796</td><td/><td/></tr><tr><td>MAE(w2v)</td><td>0.785</td><td/><td/></tr><tr><td>MAPE(w2v)</td><td>0.794</td><td/><td/></tr><tr><td>MSLE(w2v)</td><td>0.783</td><td/><td/></tr><tr><td colspan=\"2\">Table 3: Performance on development dataset.</td><td/><td/></tr><tr><td colspan=\"2\">Ensemble result of two-layer BiLSTM under</td><td/><td/></tr><tr><td colspan=\"2\">different loss functions and word embeddings.</td><td/><td/></tr></table>",
| "num": null, |
| "text": "the third line.", |
| "html": null |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
"text": "Performance on test dataset. Final results on the test set leaderboard; our system ranks 6th overall.",
| "html": null |
| } |
| } |
| } |
| } |