| { |
| "paper_id": "S14-2018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:32:53.750613Z" |
| }, |
| "title": "Biocom Usp: Tweet Sentiment Analysis with Adaptive Boosting Ensemble", |
| "authors": [ |
| { |
| "first": "N\u00e1dia", |
| "middle": [ |
| "F F" |
| ], |
| "last": "Silva", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Eduardo", |
| "middle": [ |
| "R" |
| ], |
| "last": "Hruschka", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Rafael", |
| "middle": [], |
| "last": "Estevam", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "estevam@dc.ufscar.br" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hruschka", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe our approach for the SemEval-2014 task 9: Sentiment Analysis in Twitter. We make use of an ensemble learning method for sentiment classification of tweets that relies on varied features such as feature hashing, part-of-speech, and lexical features. Our system was evaluated in the Twitter message-level task. This work is licensed under a Creative Commons Attribution 4.0 International Licence.", |
| "pdf_parse": { |
| "paper_id": "S14-2018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We describe our approach for the SemEval-2014 task 9: Sentiment Analysis in Twitter. We make use of an ensemble learning method for sentiment classification of tweets that relies on varied features such as feature hashing, part-of-speech, and lexical features. Our system was evaluated in the Twitter message-level task.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Sentiment analysis is a field of study that investigates feelings expressed in texts. It has become important especially due to the growth of the internet, the content generated by its users, and the emergence of social networks. In social networks such as Twitter, people post their opinions in a colloquial and compact language, producing a large volume of data that can be used as a source of information for automatic sentiment-inference tools. There is enormous interest in sentiment analysis of Twitter messages, known as tweets, with applications in several segments, such as (i) directing marketing campaigns and extracting consumer reviews of services and products (Jansen et al., 2009); (ii) identifying manifestations of bullying (Xu et al., 2012); (iii) forecasting box-office revenues for movies (Asur and Huberman, 2010); and (iv) predicting acceptance or rejection of presidential candidates (Diakopoulos and Shamma, 2010; O'Connor et al., 2010).",
| "cite_spans": [ |
| { |
| "start": 709, |
| "end": 730, |
| "text": "(Jansen et al., 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 777, |
| "end": 794, |
| "text": "(Xu et al., 2012)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 857, |
| "end": 882, |
| "text": "(Asur and Huberman, 2010)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 973, |
| "end": 986, |
| "text": "Shamma, 2010;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 987, |
| "end": 1009, |
| "text": "O'Connor et al., 2010)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "One of the problems encountered by researchers in tweet sentiment analysis is the scarcity of public datasets. Although Twitter sentiment datasets have already been created, they are either small, such as the Obama-McCain Debate corpus (Shamma et al., 2009) and the Health Care Reform corpus (Speriosu et al., 2011), or big and proprietary, as in (Lin and Kolcz, 2012). Others rely on noisy labels obtained from emoticons and hashtags (Go et al., 2009). The SemEval-2014 task 9: Sentiment Analysis in Twitter (Nakov et al., 2013) provides a public dataset that can be used to compare the accuracy of different approaches.",
| "cite_spans": [ |
| { |
| "start": 232, |
| "end": 253, |
| "text": "(Shamma et al., 2009)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 284, |
| "end": 307, |
| "text": "(Speriosu et al., 2011)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 342, |
| "end": 363, |
| "text": "(Lin and Kolcz, 2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 431, |
| "end": 448, |
| "text": "(Go et al., 2009)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 506, |
| "end": 526, |
| "text": "(Nakov et al., 2013)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we propose to analyse tweet sentiment using Adaptive Boosting (Freund and Schapire, 1997) with the well-known Multinomial Naive Bayes classifier. Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules. The AdaBoost algorithm (Freund and Schapire, 1997) was the first practical boosting algorithm, and it remains one of the most widely used and studied, with applications in numerous fields. Therefore, it has the potential to be very useful for tweet sentiment analysis, as we show in this paper.",
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 114, |
| "text": "(Freund and Schapire, 1997)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 365, |
| "end": 392, |
| "text": "(Freund and Schapire, 1997)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Classifier ensembles for tweet sentiment analysis have been underexplored in the literature; a few exceptions are (Lin and Kolcz, 2012; Clark and Wicentwoski, 2013; Rodriguez et al., 2013; Hassan et al., 2013). Lin and Kolcz (2012) used logistic regression classifiers learned from hashed byte 4-grams as features: the feature extractor treats the tweet as a raw byte array, moves a four-byte sliding window along the array, and hashes the contents of those bytes, taking the resulting value as the feature id. Here, 4-grams refer to sequences of four characters, not four words. They made no attempt to perform any linguistic processing, not even word tokenization. For each of their (proprietary) datasets, they experimented with ensembles of different sizes. The ensembles were formed by different models, obtained from different training sets but with the same learning algorithm (logistic regression). Their results show that the ensembles lead to more accurate classifiers.",
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 135, |
| "text": "(Lin and Kolcz, 2012;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 136, |
| "end": 164, |
| "text": "Clark and Wicentwoski, 2013;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 165, |
| "end": 188, |
| "text": "Rodriguez et al., 2013;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 189, |
| "end": 209, |
| "text": "Hassan et al., 2013)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 212, |
| "end": 232, |
| "text": "Lin and Kolcz (2012)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Rodr\u00edguez et al. (2013) and Clark et al. (2013) proposed the use of classifier ensembles at the expression level, which is related to Contextual Polarity Disambiguation. In this perspective, the sentiment label (positive, negative, or neutral) is applied to a specific phrase or word within the tweet and does not necessarily match the sentiment of the entire tweet.",
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 44, |
| "text": "Clark et al. (2013)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Finally, another type of ensemble framework has recently been proposed by Hassan et al. (2013), who deal with class imbalance, sparsity, and representational issues. The authors propose to enrich the corpus using multiple additional datasets related to the task of sentiment classification. Differently from previous works, the authors use a combination of unigrams and bigrams of simple words, part-of-speech, and semantic features.",
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 94, |
| "text": "Hassan et al. (2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "None of the previous works used AdaBoost (Freund and Schapire, 1996). Also, lexicons and/or part-of-speech in combination with feature hashing, as in (Lin and Kolcz, 2012), have not been addressed in the literature.",
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 68, |
| "text": "(Freund and Schapire, 1996)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 153, |
| "end": 174, |
| "text": "(Lin and Kolcz, 2012)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Boosting is a relatively young, yet extremely powerful, machine learning technique. The main idea behind boosting algorithms is to combine multiple weak learners -classification algorithms that perform only slightly better than random guessing -into a powerful composite classifier. Our focus is on the well-known AdaBoost algorithm (Freund and Schapire, 1997) with Multinomial Naive Bayes as the base classifier (Figure 1).",
| "cite_spans": [ |
| { |
| "start": 333, |
| "end": 360, |
| "text": "(Freund and Schapire, 1997)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 414, |
| "end": 423, |
| "text": "(Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "AdaBoost Ensemble", |
| "sec_num": "3" |
| }, |
| { |
"text": "AdaBoost and its variants have been applied to diverse domains with great success, owing to their solid theoretical foundation, accurate predictions, and great simplicity (Freund and Schapire, 1997). For example, Viola and Jones (2001) applied AdaBoost to face detection, while Hao and Luo (2006) dealt with image segmentation, recognition of handwritten digits, and outdoor scene classification problems. In (Bloehdorn and Hotho, 2004), text classification is explored. ",
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 197, |
| "text": "(Freund and Schapire, 1997)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 213, |
| "end": 235, |
| "text": "Viola and Jones (2001)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 269, |
| "end": 287, |
| "text": "Hao and Luo (2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 400, |
| "end": 427, |
| "text": "(Bloehdorn and Hotho, 2004)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AdaBoost Ensemble", |
| "sec_num": "3" |
| }, |
| { |
"text": "The most commonly used text representation method in the literature is the Bag of Words (BOW) technique, where a document is considered as a bag of its words and is represented by a feature vector containing all the words appearing in the corpus. Although BOW is simple and very effective in text classification, a large amount of information from the original document is discarded: word order is disrupted and syntactic structures are broken. Therefore, more sophisticated feature extraction methods, with a deeper understanding of the documents, are required for sentiment classification tasks. Instead of using only BOW, alternative ways to represent text, including Part of Speech (PoS) based features, feature hashing, and lexicons, have been addressed in the literature.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Engineering", |
| "sec_num": "4" |
| }, |
| { |
"text": "We implemented an ensemble of classifiers that receives as input a combination of three feature sets: i) lexicon features, which capture the semantic aspect of a tweet; ii) feature hashing, which captures surface forms such as abbreviations, slang terms from this type of social network, elongated words (for example, loveeeee), and sentences with words without a space between them (for instance, Ilovveapple!); and iii) syntactic features specific to tweets. Technical details of each feature set are provided in the sequel.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Engineering", |
| "sec_num": "4" |
| }, |
| { |
"text": "We use the sentiment lexicons provided by (Thelwall et al., 2010) and (Hu and Liu, 2004). The former is known as SentiStrength and provides an emotion vocabulary, an emoticon list (with positive, negative, and neutral icons), a negation list, and a booster word list. We use the negation list in cases where the next term in a sentence is an opinion word (either positive or negative). In such cases we have polarity inversion. For example, in the sentence \"The house is not beautiful\", the negation word \"not\" inverts the polarity of the opinion word \"beautiful\". The booster word list is composed of adverbs that suggest more or less emphasis on the sentiment. For example, in the sentence \"He was incredibly rude.\", the term \"incredibly\" is an adverb that lays emphasis on the opinion word \"rude\". Besides SentiStrength, we use the lexicon approach proposed by (Hu and Liu, 2004), which provides a list of words and their associations with positive and negative sentiments that is very useful for sentiment analysis.",
| "cite_spans": [ |
| { |
| "start": 43, |
| "end": 66, |
| "text": "(Thelwall et al., 2010)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 71, |
| "end": 89, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 869, |
| "end": 887, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon Features", |
| "sec_num": null |
| }, |
| { |
"text": "These two lexicons were used to build the first feature set, as shown in Table 1, which presents an example representation for the following tweet: \"The soccer team didn't play extremely bad last Wednesday.\" The word \"bad\" exists in the lexicon list of (Hu and Liu, 2004), and it is a negative word. The word \"bad\" also exists in the negative word list provided by (Thelwall et al., 2010). The term \"didn't\" is a negation word according to SentiStrength (Thelwall et al., 2010), and it inverts the polarity of the opinion words ahead. Finally, the term \"extremely\" belongs to the booster word list, and it adds emphasis to the opinion words ahead. ",
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 281, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 371, |
| "end": 394, |
| "text": "(Thelwall et al., 2010)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 461, |
| "end": 484, |
| "text": "(Thelwall et al., 2010)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 73, |
| "end": 80, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lexicon Features", |
| "sec_num": null |
| }, |
| { |
"text": "Feature hashing has been introduced for text classification in (Shi et al., 2009), (Weinberger et al., 2009), (Forman and Kirshenbaum, 2008), (Langford et al., 2007), and (Caragea et al., 2011). In the context of tweet classification, feature hashing offers an approach to reducing the number of features provided as input to a learning algorithm. The original high-dimensional space is \"reduced\" by hashing the features into a lower-dimensional space, i.e., mapping features to hash keys. Thus, multiple features can be mapped to the same hash key, thereby \"aggregating\" their counts.",
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 81, |
| "text": "(Shi et al., 2009)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 84, |
| "end": 109, |
| "text": "(Weinberger et al., 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 112, |
| "end": 142, |
| "text": "(Forman and Kirshenbaum, 2008)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 145, |
| "end": 168, |
| "text": "(Langford et al., 2007)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 171, |
| "end": 193, |
| "text": "(Caragea et al., 2011)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature hashing", |
| "sec_num": null |
| }, |
| { |
"text": "We used the MurmurHash3 function (SMHasher, 2010), a non-cryptographic hash function suitable for general hash-based lookup tables. It has been used for many purposes, and a recent application is feature hashing, also known as the hashing trick. Instead of building and storing an explicit traditional bag-of-words with n-grams, feature hashing uses a hash function to reduce the dimensionality of the output space, and the length of this space (the number of features) is fixed in advance. For this paper, we used the following code (in Python):",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature hashing", |
| "sec_num": null |
| }, |
| { |
| "text": "Code Listing 1: Murmurhash: from sklearn.utils.murmurhash import murmurhash3_bytes_u32 for w in \"i loveee apple\".split():", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature hashing", |
| "sec_num": null |
| }, |
| { |
| "text": "print(\"{0} => {1}\".format( w,murmurhash3_bytes_u32(w,0)%2 ** 10))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature hashing", |
| "sec_num": null |
| }, |
| { |
"text": "The dimensionality is 2**10, i.e., 1024 features. In this code, the output is a hash code for each word \"w\" in the phrase \"i loveee apple\", i.e., i => 43, loveee => 381, and apple => 144. Table 2 shows an example of the feature hashing representation. ",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 186, |
| "end": 193, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feature hashing", |
| "sec_num": null |
| }, |
| { |
"text": "bucket:   1  2  3  4  ...  1024  |  class\ntweet 1:  0  0  1  1  ...  0     |  positive\ntweet 2:  0  1  0  3  ...  0     |  negative\ntweet 3:  2  0  0  0  ...  0     |  positive\n...\ntweet n:  0  0  2  1  ...  0     |  neutral",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature hashing", |
| "sec_num": null |
| }, |
| { |
"text": "We used the Part of Speech (PoS) tagger for tweets from the Twitter NLP tool (Gimpel et al., 2011). It encompasses 25 tags, including nominal, nominal plus verbal, other open-class words such as adjectives, adverbs, and interjections, and Twitter-specific tags such as hashtags, at-mentions, and discourse markers, just to name a few. A combination of lexicons, feature hashing, and part-of-speech features is used to train the ensemble classifiers, resulting in 1024 features from feature hashing, 3 features from lexicons, and 25 features from PoS.",
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 98, |
| "text": "(Gimpel et al., 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Specific syntactic (PoS) features", |
| "sec_num": null |
| }, |
| { |
"text": "We conducted experiments using the WEKA platform 1 . Table 4 shows the class distributions in the training, development, and testing sets. Table 5 presents the results for the positive and negative classes with the classifiers used in the training set. ",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 56, |
| "end": 63, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 138, |
| "end": 145, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup and Results", |
| "sec_num": "5" |
| }, |
| { |
"text": "From our results, we conclude that the use of AdaBoost provides good performance in tweet sentiment analysis (message-level subtask).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6" |
| }, |
| { |
"text": "In the cross-validation process, Multinomial Naive Bayes (MNB) has shown better results than Support Vector Machines (SVM) as a component for AdaBoost; Table 6 reports the results in the test sets for AdaBoost plus Multinomial Naive Bayes, the best algorithm in cross-validation. However, we feel that further investigations are necessary before making strong claims about this result.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 169, |
| "end": 176, |
| "text": "Table 6", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6" |
| }, |
| { |
"text": "Overall, the SemEval tasks have made evident the usual challenges of mining opinions from Social Media channels: noisy text, irregular grammar and orthography, highly specific lingo, and others. Moreover, temporal dependencies can affect performance if the training and test data have been gathered at different times.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors would like to thank the Research Agencies CAPES, FAPESP, and CNPq for their financial support.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Predicting the future with social media", |
| "authors": [ |
| { |
| "first": "Sitaram", |
| "middle": [], |
| "last": "Asur", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bernardo", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Huberman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 International Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sitaram Asur and Bernardo A. Huberman. 2010. Predicting the future with social media. In Pro- ceedings of the 2010 International Conference on", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "WI-IAT '10", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "01", |
| "issue": "", |
| "pages": "492--499", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Web Intelligence and Intelligent Agent Technology -Volume 01, WI-IAT '10, pages 492-499, Wash- ington, DC, USA. IEEE Computer Society.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Text classification by boosting weak learners based on terms and concepts", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Bloehdorn", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Hotho", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Fourth IEEE International Conference on Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "331--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephan Bloehdorn and Andreas Hotho. 2004. Text classification by boosting weak learners based on terms and concepts. In Proceedings of the Fourth IEEE International Conference on Data Mining, pages 331-334. IEEE Computer Society Press, November.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Protein sequence classification using feature hashing", |
| "authors": [ |
| { |
| "first": "Cornelia", |
| "middle": [], |
| "last": "Caragea", |
| "suffix": "" |
| }, |
| { |
| "first": "Adrian", |
| "middle": [], |
| "last": "Silvescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasenjit", |
| "middle": [], |
| "last": "Mitra", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "BIBM", |
| "volume": "", |
| "issue": "", |
| "pages": "538--543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cornelia Caragea, Adrian Silvescu, and Prasen- jit Mitra. 2011. Protein sequence classifica- tion using feature hashing. In Fang-Xiang Wu, Mohammed Javeed Zaki, Shinichi Morishita, Yi Pan, Stephen Wong, Anastasia Christianson, and Xiaohua Hu, editors, BIBM, pages 538-543. IEEE.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Swatcs: Combining simple classifiers with estimated accuracy", |
| "authors": [ |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich", |
| "middle": [], |
| "last": "Wicentwoski", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "425--429", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sam Clark and Rich Wicentwoski. 2013. Swatcs: Combining simple classifiers with estimated accuracy. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 425-429, Atlanta, Georgia, USA, June.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Characterizing debate performance via aggregated twitter sentiment", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nicholas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Diakopoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "David", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Shamma", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10", |
| "volume": "", |
| "issue": "", |
| "pages": "1195--1198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicholas A. Diakopoulos and David A. Shamma. 2010. Characterizing debate performance via aggregated twitter sentiment. In Proceedings of the SIGCHI Conference on Human Factors in Com- puting Systems, CHI '10, pages 1195-1198, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Extremely fast text feature extraction for classification and indexing", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Forman", |
| "suffix": "" |
| }, |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Kirshenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "CIKM '08: Proceeding of the 17th ACM conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "1221--1230", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Forman and Evan Kirshenbaum. 2008. Extremely fast text feature extraction for clas- sification and indexing. In CIKM '08: Proceed- ing of the 17th ACM conference on Information and knowledge management, pages 1221-1230, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Experiments with a new boosting algorithm", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Thirteenth International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "148--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund and Robert E. Schapire. 1996. Ex- periments with a new boosting algorithm. In Thirteenth International Conference on Machine Learning, pages 148-156, San Francisco. Morgan Kaufmann.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A decision-theoretic generalization of on-line learning and an application to boosting", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Journal of Computer and System Sciences", |
| "volume": "55", |
| "issue": "1", |
| "pages": "119--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund and Robert E Schapire. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Jour- nal of Computer and System Sciences, 55(1):119 - 139.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Part-of-speech tagging for twitter: Annotation, features, and experiments", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "O'", |
| "middle": [], |
| "last": "Brendan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Mills", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Eisenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Dani", |
| "middle": [], |
| "last": "Heilman", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Yogatama", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Flanigan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics -Short Papers", |
| "volume": "2", |
| "issue": "", |
| "pages": "42--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics -Short Papers -Volume 2, HLT '11, pages 42-47, Stroudsburg, PA, USA.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Twitter sentiment classification using distant supervision. Processing", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Go", |
| "suffix": "" |
| }, |
| { |
| "first": "Richa", |
| "middle": [], |
| "last": "Bhayani", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. Processing, pages 1-6.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Generalized Multiclass AdaBoost and Its Applications to Multimedia Classification", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Hao", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiebo", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '06)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Hao and Jiebo Luo. 2006. Generalized Multiclass AdaBoost and Its Applications to Multimedia Classification. In Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '06), page 113, Washington, DC, USA, June. IEEE.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Twitter sentiment analysis: A bootstrap ensemble framework", |
| "authors": [ |
| { |
| "first": "Ammar", |
| "middle": [], |
| "last": "Hassan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ahmed", |
| "middle": [], |
| "last": "Abbasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "SocialCom", |
| "volume": "", |
| "issue": "", |
| "pages": "357--364", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ammar Hassan, Ahmed Abbasi, and Daniel Zeng. 2013. Twitter sentiment analysis: A bootstrap ensemble framework. In SocialCom, pages 357-364. IEEE.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '04", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '04, pages 168-177, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Twitter power: Tweets as electronic word of mouth", |
| "authors": [ |
| { |
| "first": "Bernard", |
| "middle": [ |
| "J" |
| ], |
| "last": "Jansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mimi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Sobel", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdur", |
| "middle": [], |
| "last": "Chowdury", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "J. Am. Soc. Inf. Sci. Technol", |
| "volume": "60", |
| "issue": "11", |
| "pages": "2169--2188", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernard J. Jansen, Mimi Zhang, Kate Sobel, and Abdur Chowdury. 2009. Twitter power: Tweets as electronic word of mouth. J. Am. Soc. Inf. Sci. Technol., 60(11):2169-2188, nov.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Vowpal wabbit online learning project", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Langford", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Strehl", |
| "suffix": "" |
| }, |
| { |
| "first": "Lihong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Langford, Alex Strehl, and Lihong Li. 2007. Vowpal wabbit online learning project. http://mloss.org/software/view/53/.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Large-scale machine learning at Twitter", |
| "authors": [ |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Alek", |
| "middle": [], |
| "last": "Kolcz", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, SIGMOD '12", |
| "volume": "", |
| "issue": "", |
| "pages": "793--804", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jimmy Lin and Alek Kolcz. 2012. Large-scale machine learning at Twitter. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, SIGMOD '12, pages 793-804, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "SemEval-2013 Task 2: Sentiment analysis in Twitter", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Zornitsa", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "312--320", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 Task 2: Sentiment analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 312-320, Atlanta, Georgia, USA, June.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "From tweets to polls: Linking text sentiment to public opinion time series", |
| "authors": [ |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramnath", |
| "middle": [], |
| "last": "Balasubramanyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [ |
| "R" |
| ], |
| "last": "Routledge", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ICWSM'10", |
| "volume": "", |
| "issue": "", |
| "pages": "1--1", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In ICWSM'10, pages 1-1.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "FBM: Combining lexicon-based ML and heuristics for social media polarities", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Rodríguez-Penagos", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordi", |
| "middle": [], |
| "last": "Atserias", |
| "suffix": "" |
| }, |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Codina-Filba", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "García-Narbona", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Grivolla", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrik", |
| "middle": [], |
| "last": "Lambert", |
| "suffix": "" |
| }, |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Saurí", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of SemEval-2013 -International Workshop on Semantic Evaluation Co-located with *Sem and NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "2013--2023", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos Rodríguez-Penagos, Jordi Atserias, Joan Codina-Filba, David García-Narbona, Jens Grivolla, Patrik Lambert, and Roser Saurí. 2013. FBM: Combining lexicon-based ML and heuristics for social media polarities. In Proceedings of SemEval-2013 - International Workshop on Semantic Evaluation Co-located with *Sem and NAACL, Atlanta, Georgia. URL accessed 2013-10-10.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Tweet the debates: Understanding community annotation of uncollected sources", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "A" |
| ], |
| "last": "Shamma", |
| "suffix": "" |
| }, |
| { |
| "first": "Lyndon", |
| "middle": [], |
| "last": "Kennedy", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [ |
| "F" |
| ], |
| "last": "Churchill", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "WSM '09: Proceedings of the First SIGMM Workshop on Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David A. Shamma, Lyndon Kennedy, and Elizabeth F. Churchill. 2009. Tweet the debates: Understanding community annotation of uncollected sources. In WSM '09: Proceedings of the First SIGMM Workshop on Social Media.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Hash kernels for structured data", |
| "authors": [ |
| { |
| "first": "Qinfeng", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Petterson", |
| "suffix": "" |
| }, |
| { |
| "first": "Gideon", |
| "middle": [], |
| "last": "Dror", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Langford", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Smola", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "V N" |
| ], |
| "last": "Vishwanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "J. Mach. Learn. Res", |
| "volume": "10", |
| "issue": "", |
| "pages": "2615--2637", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and S.V.N. Vishwanathan. 2009. Hash kernels for structured data. J. Mach. Learn. Res., 10:2615-2637.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "The MurmurHash family of hash functions", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "SMHasher", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "SMHasher. 2010. The MurmurHash family of hash functions.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Twitter polarity classification with label propagation over lexical links and the follower graph", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Speriosu", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikita", |
| "middle": [], |
| "last": "Sudan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sid", |
| "middle": [], |
| "last": "Upadhyay", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the First Workshop on Unsupervised Learning in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "53--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Speriosu, Nikita Sudan, Sid Upadhyay, and Jason Baldridge. 2011. Twitter polarity classification with label propagation over lexical links and the follower graph. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 53-63, Stroudsburg, PA, USA.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Sentiment strength detection in short informal text", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Thelwall", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevan", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgios", |
| "middle": [], |
| "last": "Paltoglou", |
| "suffix": "" |
| }, |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvid", |
| "middle": [], |
| "last": "Kappas", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Robust real-time object detection", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Viola", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "International Journal of Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Viola and Michael Jones. 2001. Robust real-time object detection. In International Journal of Computer Vision.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Feature hashing for large scale multitask learning", |
| "authors": [ |
| { |
| "first": "Kilian", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Weinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Anirban", |
| "middle": [], |
| "last": "Dasgupta", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Langford", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "J" |
| ], |
| "last": "Smola", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Attenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ACM International Conference Proceeding Series", |
| "volume": "382", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kilian Q. Weinberger, Anirban Dasgupta, John Langford, Alexander J. Smola, and Josh Attenberg. 2009. Feature hashing for large scale multitask learning. In Andrea Pohoreckyj Danyluk, L Bottou, and Michael L. Littman, editors, ICML, volume 382 of ACM International Conference Proceeding Series, page 140. ACM.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Learning from bullying traces in social media", |
| "authors": [ |
| { |
| "first": "Jun-Ming", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kwang-Sung", |
| "middle": [], |
| "last": "Jun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojin", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Amy", |
| "middle": [], |
| "last": "Bellmore", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "656--666", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun-Ming Xu, Kwang-Sung Jun, Xiaojin Zhu, and Amy Bellmore. 2012. Learning from bullying traces in social media. In HLT-NAACL, pages 656-666.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Figure 1: AdaBoost Approach" |
| }, |
| "TABREF0": { |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "Representing Twitter messages with lexicons.", |
| "html": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "text": "Representing Twitter messages with feature hashing.", |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>class</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "shows an example of syntactic features representation. tag 1 tag 2 tag 3 tag 4 \u2022 \u2022 \u2022 tag 25", |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>: Representing Twitter messages with</td></tr><tr><td>syntactic features.</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td/><td/><td>Training Set</td><td/><td/></tr><tr><td>Set</td><td>Positive</td><td>Negative</td><td>Neutral</td><td>Total</td></tr><tr><td>Train</td><td colspan=\"4\">3,640 (37%) 1,458 (15%) 4,586 (48%) 9,684</td></tr><tr><td/><td colspan=\"2\">Development Set</td><td/><td/></tr><tr><td>Set</td><td>Positive</td><td>Negative</td><td>Neutral</td><td>Total</td></tr><tr><td>Dev</td><td>575 (35%)</td><td>340(20%)</td><td>739 (45%)</td><td>1,654</td></tr><tr><td/><td/><td>Testing Sets</td><td/><td/></tr><tr><td>Set</td><td>Positive</td><td>Negative</td><td>Neutral</td><td>Total</td></tr><tr><td>LiveJournal</td><td>427 (37%)</td><td>304 (27%)</td><td>411 (36%)</td><td>1,142</td></tr><tr><td>SMS2013</td><td>492 (23%)</td><td>394(19%)</td><td colspan=\"2\">1,207 (58%) 2,093</td></tr><tr><td>Twitter2013</td><td colspan=\"2\">1,572 (41%) 601 (16%)</td><td colspan=\"2\">1,640 (43%) 3,813</td></tr><tr><td>Twitter2014</td><td>982 (53%)</td><td>202 (11%)</td><td>669 (36%)</td><td>1,853</td></tr><tr><td colspan=\"2\">Twitter2014Sar 33 (38%)</td><td>40 (47%)</td><td>13 (15%)</td><td>86</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "shows the computed results by SemEval organizers in the test sets.", |
| "html": null |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td>: Class distributions in the training set</td></tr><tr><td>(Train), development set (Dev) and testing set</td></tr><tr><td>(Test).</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "", |
| "html": null |
| }, |
| "TABREF7": { |
| "content": "<table><tr><td/><td colspan=\"2\">Scoring LiveJournal2014</td><td/></tr><tr><td>class</td><td colspan=\"3\">precision recall F-measure</td></tr><tr><td>positive</td><td>69.79</td><td>64.92</td><td>67.27</td></tr><tr><td>negative</td><td>76.64</td><td>61.64</td><td>68.33</td></tr><tr><td>neutral</td><td>51.82</td><td>69.84</td><td>59.50</td></tr><tr><td/><td colspan=\"2\">overall score : 67.80</td><td/></tr><tr><td/><td colspan=\"2\">Scoring SMS2013</td><td/></tr><tr><td>positive</td><td>61.99</td><td>46.78</td><td>53.32</td></tr><tr><td>negative</td><td>72.34</td><td>42.86</td><td>53.82</td></tr><tr><td>neutral</td><td>53.85</td><td>83.76</td><td>65.56</td></tr><tr><td/><td colspan=\"2\">overall score : 53.57</td><td/></tr><tr><td/><td colspan=\"2\">Scoring Twitter2013</td><td/></tr><tr><td>positive</td><td>68.07</td><td>66.13</td><td>67.08</td></tr><tr><td>negative</td><td>48.09</td><td>50.00</td><td>49.02</td></tr><tr><td>neutral</td><td>67.20</td><td>68.15</td><td>67.67</td></tr><tr><td/><td colspan=\"2\">overall score : 58.05</td><td/></tr><tr><td/><td colspan=\"2\">Scoring Twitter2014</td><td/></tr><tr><td>positive</td><td>65.17</td><td>70.48</td><td>67.72</td></tr><tr><td>negative</td><td>53.47</td><td>48.21</td><td>50.70</td></tr><tr><td>neutral</td><td>59.94</td><td>55.62</td><td>57.70</td></tr><tr><td/><td colspan=\"2\">overall score : 59.21</td><td/></tr><tr><td colspan=\"4\">Scoring Twitter2014Sarcasm</td></tr><tr><td>positive</td><td>63.64</td><td>44.68</td><td>52.50</td></tr><tr><td>negative</td><td>22.50</td><td>75.00</td><td>34.62</td></tr><tr><td>neutral</td><td>76.92</td><td>37.04</td><td>50.00</td></tr><tr><td/><td colspan=\"2\">overall score : 43.56</td><td/></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "text": "Results from 10-fold cross validation in the training set with default parameters of Weka. MNB and SVM stand for Multinomial Naive Bayes and Support Vector Machine, respectively.", |
| "html": null |
| } |
| } |
| } |
| } |