{
"paper_id": "S16-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:26:57.868605Z"
},
"title": "CICBUAPnlp at SemEval-2016 Task 4-A: Discovering Twitter Polarity using Enhanced Embeddings",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "G\u00f3mez-Adorno",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Grigori",
"middle": [],
"last": "Sidorov",
"suffix": "",
"affiliation": {},
"email": "sidorov@cic.ipn.mx"
},
{
"first": "Darnes",
"middle": [],
"last": "Vilari\u00f1o",
"suffix": "",
"affiliation": {},
"email": "darnes@cs.buap.mx"
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": "",
"affiliation": {},
"email": "dpinto@cs.buap.mx"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents our approach for SemEval 2016 task 4: Sentiment Analysis in Twitter. We participated in Subtask A: Message Polarity Classification. The aim is to classify Twitter messages into positive, neutral, and negative polarity. We used a lexical resource for pre-processing of social media data and train a neural network model for feature representation. Our resource includes dictionaries of slang words, contractions, abbreviations, and emoticons commonly used in social media. For the classification process, we pass the features obtained in an unsupervised manner into an SVM classifier.",
"pdf_parse": {
"paper_id": "S16-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents our approach for SemEval 2016 task 4: Sentiment Analysis in Twitter. We participated in Subtask A: Message Polarity Classification. The aim is to classify Twitter messages into positive, neutral, and negative polarity. We used a lexical resource for pre-processing of social media data and train a neural network model for feature representation. Our resource includes dictionaries of slang words, contractions, abbreviations, and emoticons commonly used in social media. For the classification process, we pass the features obtained in an unsupervised manner into an SVM classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we describe our approach for the Se-mEval 2016 task 4 \"Sentiment Analysis in Twitter\" subtask A (Nakov et al., 2016) , where the goal is to classify a tweet message as either positive, neutral, or negative. The main goal of our approach is to improve the feature representation obtained by a well-known neural network method-Doc2vec (Le and Mikolov, 2014) , using dictionaries of abbreviations, contractions, slang words, and emoticons.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Nakov et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 356,
"end": 370,
"text": "Mikolov, 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Approaches based on neural networks for unsupervised feature representation (or embeddings) often do not perform data cleaning (Le and Mikolov, 2014; Socher et al., 2011) , considering that the network itself would solve the related problems. These approaches treat special characters such as ,.!?# and user mentions as a regular word (Le and Mikolov, 2014; Brigadir et al., 2014) . Still, in some works which use embeddings a basic data cleaning process (i.e., stopwords removal, URL filtering, and removal of rare terms) improves the feature representation and, consequently, the performance of the classification task (Yan et al., 2014; Rangarajan Sridhar, 2015; Jiang et al., 2014) .",
"cite_spans": [
{
"start": 135,
"end": 149,
"text": "Mikolov, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 150,
"end": 170,
"text": "Socher et al., 2011)",
"ref_id": "BIBREF15"
},
{
"start": 343,
"end": 357,
"text": "Mikolov, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 358,
"end": 380,
"text": "Brigadir et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 621,
"end": 639,
"text": "(Yan et al., 2014;",
"ref_id": "BIBREF17"
},
{
"start": 640,
"end": 665,
"text": "Rangarajan Sridhar, 2015;",
"ref_id": "BIBREF13"
},
{
"start": 666,
"end": 685,
"text": "Jiang et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The problem with the content of social media messages is that they usually have a lot of nonstandard language expressions (Pinto et al., 2012; Atkinson et al., 2013) . Due to the short nature of the messages, most of the users use a large vocabulary of slang words, abbreviations, and emoticons (Das and Bandyopadhyay, 2011) . Slang words are not considered as a part of the standard vocabulary of a language, and they are mostly used in informal messages, while abbreviations are shortened forms of a word or name that are used in order to replace the full forms. Emoticons usually convey the current feeling of the message writer.",
"cite_spans": [
{
"start": 122,
"end": 142,
"text": "(Pinto et al., 2012;",
"ref_id": "BIBREF11"
},
{
"start": 143,
"end": 165,
"text": "Atkinson et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 295,
"end": 324,
"text": "(Das and Bandyopadhyay, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this task we propose a preprocessing phase using the dictionaries that we previously built for the task of Authorship Atribution (Posadas-Dur\u00e1n et al., 2015) . These dictionaries are useful for preprocessing and cleaning messages obtained from several social networks, such as Facebook, Google+, Instagram, etc.",
"cite_spans": [
{
"start": 133,
"end": 161,
"text": "(Posadas-Dur\u00e1n et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is structured as follows. Section 2 describes related work. Section 3 introduces the social media lexical resource used for this work. Section 4 presents our proposed approach. Section 5 presents the evaluation of the task using the neural network based feature representation. Finally, Section 6 draws the conclusions from our experiments and points out the possible directions of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many works that tackle the problem of social media texts pre-processing (Baldwin, 2012; Clark and Araki, 2011; Das and Bandyopadhyay, 2011) ; however, to the best of our knowledge, the research based on neural network for feature representation did not consider the effect that data cleaning have on the quality of the representation (specially on social media data).",
"cite_spans": [
{
"start": 82,
"end": 97,
"text": "(Baldwin, 2012;",
"ref_id": "BIBREF1"
},
{
"start": 98,
"end": 120,
"text": "Clark and Araki, 2011;",
"ref_id": "BIBREF3"
},
{
"start": 121,
"end": 149,
"text": "Das and Bandyopadhyay, 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several approaches have been proposed for vector-space distributed representations of words and phrases. These models are used mainly for predicting a word given a surrounding context. However, most of the authors indicate that distributed representations of words and phrases can also capture syntactic and semantic similarity or relatedness (Le and Mikolov, 2014; Socher et al., 2013; Mikolov et al., 2013) . This particular behaviour makes these methods attractive to solve several NLP tasks, nevertheless, at the same time, it raises new issues, such as dealing with unnormalized texts, which are typically present in social media forums such as Twitter, Facebook, Instagram, among others. Researchers have proposed several pre-processing steps in order to overcome this issue, which led to an overall performance increase. Yan et al. (Yan et al., 2014) obtained almost 2% increase using standard NLP pre-processing, which consists in tokenization, lowercasing, removing stopwords and rare terms. Kumar et al. (Rangarajan Sridhar, 2015) focused on the spelling issues in social media messages, which includes repeated letters, omitted vowels, use of phonetic spellings, substitution of letters with numbers (typically syllables), use of shorthands and user created abbreviations for phrases. In a data-driven approach, Brigadir et al. (Brigadir et al., 2014) apply URL filtering combined with standard NLP preprocessing techniques.",
"cite_spans": [
{
"start": 351,
"end": 365,
"text": "Mikolov, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 366,
"end": 386,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 387,
"end": 408,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 839,
"end": 857,
"text": "(Yan et al., 2014)",
"ref_id": "BIBREF17"
},
{
"start": 1339,
"end": 1362,
"text": "(Brigadir et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We developed the dictionaries with the aim of preprocessing tweets for the author profiling task at PAN 2015 (Posadas-Dur\u00e1n et al., 2015) . First, we reviewed the tweets present in the PAN corpus and found excessive use of shortened vocabulary, which can be divided into three categories: slang words, abbreviations, and contractions. Moreover, we came across a large number of emoticons, which are a typographic display of a facial representation. The lexical resource was originally built for 4 languages, but for the purposes of this work we only use the English dictionary. The statistics for the English dictionary are presented in Table 1 . The dictionaries are freely available on our website 1 .",
"cite_spans": [
{
"start": 109,
"end": 137,
"text": "(Posadas-Dur\u00e1n et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 637,
"end": 644,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Resources",
"sec_num": "3"
},
{
"text": "From a machine learning point of view, the Message Polarity Classification task can be considered as a supervised multi-class classification problem, where a set of tweets T = {t 1 , t 2 , . . . , t i } is given, and each sample is assigned to one of the target classes {positive, negative, neutral}. So, the problem is to build a classifier F that assigns a sentiment class to unclassified tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "Since the tweets are very noisy, we perform the preprocessing over each dataset (train, unlabeled and test). In the preprocessing phase, we executed the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "Expand slang words and abbreviations Not all tweets use slang words and abbreviation in the same way. There are Twitter users that do not use slang words and due to this reason we expanded all slang words and abbreviations with their full meaning using the dictionaries described in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "Remove url ULR do not provide information about the sentiment of the tweet and because of this reason every ULR is removed from the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "Remove hashtags symbols Hashtags in tweets carry useful information about the topic and polarity of the message. We only remove the hashtag symbol, keeping the words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "Remove emoticons In order to obtain a distributed representation of a tweet, we used only words and punctuation symbols. So, unlike traditional preprocessing for sentiment analysis we removed the emoticons from tweets by looking up in our emoticons dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "For training, a vector representation of each tweet is obtained in an unsupervised manner by a neural network based model, i.e., v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "i = {v 1 , v 2 , . . . , v j }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "where v i is the vector representation of the tweet t i . In order to obtain the vector representation of the tweets, a neural network based distributed representation model is trained using the doc2vec algorithm (Le and Mikolov, 2014) . It is an unsupervised algorithm that aggregates all the words in a sentence (of variable length) into a vector of fixed length. The algorithm takes into account the contexts of words, and it is able to capture the semantics of the input texts. We used a freely available implementation of doc2vec included in the Gensim 2 python module. The doc2vec model is trained with both labeled and unlabeled tweets in order to learn the distributed representation. The learned vector representations have 300 dimentions, we set the windows size to 3 and minimal word frequency is set to 2. Then, a classifier is trained using the vector representations of the labeled tweets. We perform the experiments with the SVM liblinear classifier (Fan et al., 2008) , especifically the LinearSVC algorithm the implemented in the Scikit Learn 3 python module with default parameters.",
"cite_spans": [
{
"start": 221,
"end": 235,
"text": "Mikolov, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 965,
"end": 983,
"text": "(Fan et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "For the evaluation, the vector representations of the test tweets are obtained retraining the doc2vec model built in the training stage, plus the test tweets. Finally, the vector representation of the tweets are passed to the SVM model in order to assign the corresponding polarity label to each tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "We used the train set of SemEval-2014 Task 9: Sentiment Analysis in Twitter -subtask B (Rosenthal et al., 2014) , consisting of 6124 tweets (removing the tweets with the objective class). Besides, we expanded the training set with some tweets of this year training set (the ones we could download) and with Stanford Sentiment Analysis Dataset (Go et al., 2009) . So, in total we employed 11377 classified tweets for training. For the neural network based feature representation we used the 1.7 millons unlabeled tweets for training the Doc2Vec model.",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Rosenthal et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 343,
"end": 360,
"text": "(Go et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to Sentiment Classification",
"sec_num": "4"
},
{
"text": "In this section we present the results obtained in the competition when various test datasets are used. The evaluation metric used in the competition is the macro-averaged F measure calculated over the positive and negative classes. Table 2 presents the overall performance of our approach for different datasets. It can be observed that our approach overcome the baseline for almost all datasets. ",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We presented our results for sentiment analysis on Twitter. We rely on a supervised approach, which is based on top of a deep learning system enhanced with special preprocesing techniques using a lexical social media resource. We reported the overall accuracy for the sentiment classification task in three classes: positive, negative and neutral. In the future, we will improve our preprocessing phase by removing the target mentions, numbers and repeated sequences of characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "ican Government (CONACYT project 240844, SNI, COFAA-IPN and SIP-IPN 20151406, 20161947) .",
"cite_spans": [
{
"start": 46,
"end": 87,
"text": "COFAA-IPN and SIP-IPN 20151406, 20161947)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://www.cic.ipn.mx/\u02dcsidorov/lexicon. zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://radimrehurek.com/gensim/ 3 http://scikit-learn.org/stable/index. html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was done under the support of the \"Red Tem\u00e1tica en Tecnolog\u00edas del Lenguaje\", Mex-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A semantically-based lattice approach for assessing patterns in text mining tasks",
"authors": [
{
"first": "John",
"middle": [],
"last": "Atkinson",
"suffix": ""
},
{
"first": "Alejandro",
"middle": [],
"last": "Figueroa",
"suffix": ""
},
{
"first": "Claudio",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2013,
"venue": "Computaci\u00f3n y Sistemas",
"volume": "17",
"issue": "4",
"pages": "467--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Atkinson, Alejandro Figueroa, and Claudio P\u00e9rez. 2013. A semantically-based lattice approach for as- sessing patterns in text mining tasks. Computaci\u00f3n y Sistemas, 17(4):467-476.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Social media: Friend or foe of natural language processing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "The 26th Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "58--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin. 2012. Social media: Friend or foe of natural language processing? In The 26th Pacific Asia Conference on Language, Information and Computa- tion, pages 58-59.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adaptive Representations for Tracking Breaking News on Twitter. ArXiv e-prints",
"authors": [
{
"first": "I",
"middle": [],
"last": "Brigadir",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Greene",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cunningham",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Brigadir, D. Greene, and P. Cunningham. 2014. Adap- tive Representations for Tracking Breaking News on Twitter. ArXiv e-prints, March.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Text normalization in social media: Progress, problems and applications for a pre-processing system of casual english",
"authors": [
{
"first": "Eleanor",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Araki",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics and Related Fields",
"volume": "27",
"issue": "",
"pages": "2--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eleanor Clark and Kenji Araki. 2011. Text normalization in social media: Progress, problems and applications for a pre-processing system of casual english. Proce- dia -Social and Behavioral Sciences, 27:2 -11. Com- putational Linguistics and Related Fields.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Document Level Emotion Tagging: Machine Learning and Resource Based Approach",
"authors": [
{
"first": "Dipankar",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2011,
"venue": "Computaci\u00f3n y Sistemas",
"volume": "15",
"issue": "",
"pages": "221--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipankar Das and Sivaji Bandyopadhyay. 2011. Docu- ment Level Emotion Tagging: Machine Learning and Resource Based Approach. Computaci\u00f3n y Sistemas, 15:221 -234.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Liblinear: A library for large linear classification",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Rong-En Fan",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Xiang-Rui",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "J. Mach. Learn. Res",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. Liblinear: A library for large linear classification. J. Mach. Learn. Res., 9:1871-1874, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Twitter sentiment classification using distant supervision. Processing",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Go",
"suffix": ""
},
{
"first": "Richa",
"middle": [],
"last": "Bhayani",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. Processing, pages 1-6.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Microblog sentiment analysis with emoticon space model",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yiqun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoping",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2014,
"venue": "Social Media Processing",
"volume": "",
"issue": "",
"pages": "76--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Jiang, Yiqun Liu, Huanbo Luan, Min Zhang, and Shaoping Ma. 2014. Microblog sentiment analysis with emoticon space model. In Social Media Process- ing, pages 76-87. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1405.4053"
]
},
"num": null,
"urls": [],
"raw_text": "Quoc V Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. arXiv preprint arXiv:1405.4053.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SemEval-2016 task 3: Community question answering",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Bilal",
"middle": [],
"last": "Randeree",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval'16",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Llu\u00eds M\u00e0rquez, Walid Magdy, Alessan- dro Moschitti, Jim Glass, and Bilal Randeree. 2016. SemEval-2016 task 3: Community question an- swering. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval'16, San Diego, California, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The soundex phonetic algorithm revisited for sms-based information retrieval",
"authors": [
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Darnes",
"middle": [],
"last": "Vilari\u00f1o-Ayala",
"suffix": ""
},
{
"first": "Yuridiana",
"middle": [],
"last": "Alem\u00e1n",
"suffix": ""
},
{
"first": "Helena",
"middle": [],
"last": "G\u00f3mez-Adorno",
"suffix": ""
},
{
"first": "Nahun",
"middle": [],
"last": "Loya",
"suffix": ""
},
{
"first": "H\u00e9ctor Jim\u00e9nez-",
"middle": [],
"last": "Salazar",
"suffix": ""
}
],
"year": 2012,
"venue": "II Spanish Conference on Information Retrieval CERI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Pinto, Darnes Vilari\u00f1o-Ayala, Yuridiana Alem\u00e1n, Helena G\u00f3mez-Adorno, Nahun Loya, and H\u00e9ctor Jim\u00e9nez-Salazar. 2012. The soundex phonetic algo- rithm revisited for sms-based information retrieval. In II Spanish Conference on Information Retrieval CERI 2012.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Syntactic n-grams as features for the author profiling task: Notebook for PAN at CLEF 2015",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pablo Posadas-Dur\u00e1n",
"suffix": ""
},
{
"first": "Helena",
"middle": [],
"last": "G\u00f3mez-Adorno",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Markov",
"suffix": ""
},
{
"first": "Grigori",
"middle": [],
"last": "Sidorov",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ildar",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"F"
],
"last": "Batyrshin",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pichardo-Lagunas",
"suffix": ""
}
],
"year": 2015,
"venue": "Working Notes of CLEF 2015 -Conference and Labs of the Evaluation forum",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Pablo Posadas-Dur\u00e1n, Helena G\u00f3mez-Adorno, Ilia Markov, Grigori Sidorov, Ildar Z. Batyrshin, Alexan- der F. Gelbukh, and Obdulia Pichardo-Lagunas. 2015. Syntactic n-grams as features for the author profiling task: Notebook for PAN at CLEF 2015. In Working Notes of CLEF 2015 -Conference and Labs of the Evaluation forum, Toulouse, France, September 8-11, 2015.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised text normalization using distributed representations of words and phrases",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sridhar",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "8--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Kumar Rangarajan Sridhar. 2015. Unsupervised text normalization using distributed representations of words and phrases. In Proceedings of the 1st Work- shop on Vector Space Modeling for Natural Language Processing, pages 8-16, Denver, Colorado, June. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2014 task 9: Sentiment analysis in twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Alan Ritter, Preslav Nakov, and Veselin Stoyanov. 2014. Semeval-2014 task 9: Sentiment analysis in twitter. In Proceedings of the 8th Inter- national Workshop on Semantic Evaluation (SemEval 2014), pages 73-80, Dublin, Ireland, August. Associ- ation for Computational Linguistics and Dublin City University.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing natural scenes and natural language with recursive neural networks",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cliff",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th international conference on machine learning (ICML-11)",
"volume": "",
"issue": "",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Cliff C Lin, Chris Manning, and An- drew Y Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceed- ings of the 28th international conference on machine learning (ICML-11), pages 129-136.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the conference on empirical methods in natural language processing",
"volume": "1631",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical meth- ods in natural language processing (EMNLP), volume 1631, page 1642.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Drws: A model for learning distributed representations for words and sentences",
"authors": [
{
"first": "Chunwei",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lian'en",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2014,
"venue": "PRICAI 2014: Trends in Artificial Intelligence",
"volume": "8862",
"issue": "",
"pages": "196--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunwei Yan, Fan Zhang, and Lian'en Huang. 2014. Drws: A model for learning distributed representa- tions for words and sentences. In Duc-Nghia Pham and Seong-Bae Park, editors, PRICAI 2014: Trends in Artificial Intelligence, volume 8862 of Lecture Notes in Computer Science, pages 196-207. Springer Inter- national Publishing.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Number of entries of the English dictionary",
"content": "<table><tr><td colspan=\"2\">Type of Dictionary English</td></tr><tr><td>Abbreviations</td><td>1,346</td></tr><tr><td>Contractions</td><td>131</td></tr><tr><td>Slang words</td><td>1,249</td></tr><tr><td>Emoticons</td><td>482</td></tr><tr><td>Total</td><td>3,208</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"text": "Obtained results for 2016 Test and Progress",
"content": "<table><tr><td>Year Corpus</td><td colspan=\"2\">Ours Baseline score</td></tr><tr><td>2013 Tweet</td><td>0.194</td><td>0.292</td></tr><tr><td>SMS</td><td>0.193</td><td>0.190</td></tr><tr><td>2014 Tweet</td><td>0.335</td><td>0.346</td></tr><tr><td colspan=\"2\">Tweet Sarcasm 0.393</td><td>0.277</td></tr><tr><td>Live-Journal</td><td>0.326</td><td>0.272</td></tr><tr><td>2015 Tweet</td><td>0.303</td><td>0.303</td></tr><tr><td>2016 Tweet</td><td>0.303</td><td>0.255</td></tr></table>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}