| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:42:47.514847Z" |
| }, |
| "title": "Sarcasm Detection in Tweets with BERT and GloVe Embeddings", |
| "authors": [ |
| { |
| "first": "Akshay", |
| "middle": [], |
| "last": "Khatri", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Institute of Technology Karnataka", |
| "location": { |
| "settlement": "Surathkal" |
| } |
| }, |
| "email": "akshaykhatri0011@gmail.com" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pranav", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "hsr.pranav@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Sarcasm is a form of communication in which the person states the opposite of what they actually mean. It is ambiguous in nature. In this paper, we propose using machine learning techniques with BERT and GloVe embeddings to detect sarcasm in tweets. The dataset is preprocessed before extracting the embeddings. The proposed model also uses the context to which the user is reacting along with the actual response.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Sarcasm is a form of communication in which the person states the opposite of what they actually mean. It is ambiguous in nature. In this paper, we propose using machine learning techniques with BERT and GloVe embeddings to detect sarcasm in tweets. The dataset is preprocessed before extracting the embeddings. The proposed model also uses the context to which the user is reacting along with the actual response.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Sarcasm is defined as a sharp, bitter or cutting expression or remark and is sometimes ironic (Gibbs et al., 1994) . Identifying whether a sentence is sarcastic requires analyzing the speaker's intentions. Different kinds of sarcasm exist like propositional, embedded, like-prefixed and illocutionary (Camp, 2012) . Among these, propositional requires the use of context.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 114, |
| "text": "(Gibbs et al., 1994)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 301, |
| "end": 313, |
| "text": "(Camp, 2012)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The most common formulation of sarcasm detection is a classification task (Joshi et al., 2017) . Our task is to determine whether a given sentence is sarcastic or not. Sarcasm detection approaches are broadly classified into three types (Joshi et al., 2017) . They are rule-based, deep learning-based and statistical. Rule-based detectors are simple: they look for a negative response in a positive context and vice versa, which can be done using sentiment analysis. Deep learning-based approaches use deep learning to extract features, and the extracted features are fed into a classifier to get the result. Statistical approaches use features related to the text, such as unigrams and bigrams, which are fed to an SVM classifier.", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 94, |
| "text": "(Joshi et al., 2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 237, |
| "end": 257, |
| "text": "(Joshi et al., 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we use BERT embeddings (Devlin et al., 2018) and GloVe embeddings (Pennington et al., 2014) as features. These embeddings provide vector representations of words, on which a machine learning classifier is then trained. Before extracting the embeddings, the dataset is preprocessed to enhance the quality of the data supplied to the model.", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 59, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 81, |
| "end": 106, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There have been many methods for sarcasm detection. We discuss some of them in this section. Under rule based approaches, Maynard and Greenwood (2014) use hashtag sentiment to identify sarcasm. The disagreement of the sentiment expressed by the hashtag with the rest of the tweet is a clear indication of sarcasm. Veale and Hao (2010) identify sarcasm in similes using Google searches to determine how likely a simile is. Riloff et al. (2013) look for a positive verb and a negative situation phrase in a sentence to detect sarcasm.", |
| "cite_spans": [ |
| { |
| "start": 314, |
| "end": 334, |
| "text": "Veale and Hao (2010)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 422, |
| "end": 442, |
| "text": "Riloff et al. (2013)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literature Review", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In statistical sarcasm detection, we use features related to the text to be classified. Most approaches use bag-of-words as features (Joshi et al., 2017) . Some other features used in other papers include sarcastic patterns and punctuation (Tsur et al., 2010) , user mentions, emoticons, unigrams, sentiment-lexicon-based features (Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011), ambiguity-based, semantic relatedness (Reyes et al., 2012) , N-grams, emotion marks, intensifiers (Liebrecht et al., 2013) , unigrams (Joshi et al., 2015) , bigrams (Liebrecht et al., 2013) , word shape, pointedness (Pt\u00e1\u010dek et al., 2014) , etc. Most work in statistical sarcasm detection relies on different forms of Support Vector Machines (SVMs) (Kreuz and Caucci, 2007) . Reyes et al. (2012) use Naive Bayes and Decision Trees for multiple pairs of labels among irony, humor, politics and education. For conversational data, sequence labeling algorithms perform better than classification algorithms (Joshi et al., 2016) . They use SVM-HMM and SEARN as the sequence labeling algorithms (Joshi et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 153, |
| "text": "(Joshi et al., 2017)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 240, |
| "end": 259, |
| "text": "(Tsur et al., 2010)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 401, |
| "end": 421, |
| "text": "(Reyes et al., 2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 461, |
| "end": 485, |
| "text": "(Liebrecht et al., 2013)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 497, |
| "end": 517, |
| "text": "(Joshi et al., 2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 528, |
| "end": 552, |
| "text": "(Liebrecht et al., 2013)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 579, |
| "end": 600, |
| "text": "(Pt\u00e1\u010dek et al., 2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 711, |
| "end": 735, |
| "text": "(Kreuz and Caucci, 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 738, |
| "end": 757, |
| "text": "Reyes et al. (2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 966, |
| "end": 986, |
| "text": "(Joshi et al., 2016)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1052, |
| "end": 1072, |
| "text": "(Joshi et al., 2016)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literature Review", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For a long time, NLP was mainly based on statistical analysis, but machine learning algorithms have now taken over this domain of research providing unbeaten results. Dr. Pushpak Bhattacharyya, a well-known researcher in this field, refers to this as \"NLP-ML marriage\". Some approaches use similarity between word embeddings as features for sarcasm detection. They augment these word embedding-based features with features from their prior works. The inclusion of past features is key because they observe that using the new features alone does not suffice for an excellent performance. Some of the approaches show a considerable boost in results while using deep learning algorithms over the standard classifiers. Ghosh and Veale (2016) use a combination of CNN, RNN, and a deep neural network. Another approach uses a combination of deep learning and classifiers: deep learning (CNN) is used to extract features, and the extracted features are fed into an SVM classifier to detect sarcasm.", |
| "cite_spans": [ |
| { |
| "start": 715, |
| "end": 737, |
| "text": "Ghosh and Veale (2016)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Literature Review", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We used the Twitter dataset provided by the hosts of the shared task on Sarcasm Detection. Initial analysis reveals that this is a perfectly balanced dataset of 5000 entries: there are an equal number of sarcastic and non-sarcastic entries in it. It includes the fields label, response and context. The label specifies whether the entry is sarcastic or non-sarcastic, the response is the statement over which sarcasm needs to be detected, and the context is a list of statements which specify the context of the particular response. The test dataset has 1800 entries with the fields ID (an identifier), context and response.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Most of the time, raw data is not complete and it cannot be sent for processing (applying models). Here, preprocessing the dataset makes it suitable to apply analysis on. This is an extremely important phase as the final results are completely dependent on the quality of the data supplied to the model. However great the implementation or design of the model is, the dataset is going to be the distinguishing factor between obtaining excellent results or not. Steps followed during the preprocessing phase are: (Mayo) \u2022 Check for null values -Presence of null values in the dataset leads to inaccurate predictions. There are two approaches to handle this:", |
| "cite_spans": [ |
| { |
| "start": 512, |
| "end": 518, |
| "text": "(Mayo)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "-Delete that particular row -We use this method to handle null values. -Replace the null value with the mean, mode or median of that column -This approach cannot be used as our dataset contains only text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Tokenization and punctuation removal -The process of splitting the sentences into words and removing the punctuation, as it is of no importance for the given task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Case conversion -Converting the text of the dataset to lowercase unless the whole word is in uppercase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Stopword removal -Stop words are a set of commonly used words in a language. Some common English stop words are "a", "the", "is" and "are". The main idea behind this procedure is to remove low-value information so as to focus on the important information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Normalization -This is the process of transforming the text into its standard form. For example, the words "gooood" and "gud" can be transformed to "good", "b4" can be transformed to "before", and ":)" can be transformed to "smile", its canonical form.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Noise removal -Removal of all pieces of text that might interfere with our text analysis phase. This is a highly domain-dependent task. For the Twitter dataset, noise can be all special characters except the hashtag.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 Stemming -This is the process of converting the words to their root form for easy processing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Both training and test data are preprocessed with the above methods. Once the above preprocessing steps have been applied, we are ready to move on to model development.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this section, we describe the methods we used to build the model for sarcasm detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Feature extraction is an extremely important factor along with pre-processing in the model building process. In the field of natural language processing (NLP), sentence and word embeddings are widely used to represent the features of the language. Word embedding is the collective name for a set of feature learning techniques in natural language processing where words or phrases from the vocabulary are mapped to vectors of real numbers. In our research, we used two types of embeddings for the feature extraction phase: BERT (Bidirectional Encoder Representations from Transformers) word embeddings (Devlin et al., 2018) and GloVe (Global Vectors) embeddings (Pennington et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 602, |
| "end": 623, |
| "text": "(Devlin et al., 2018)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 662, |
| "end": 687, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "'Bert-as-service' (Xiao, 2018 ) is a useful tool for the generation of the word embeddings. Each word is represented as a vector of size 768. The embeddings given by BERT are contextual. Every sentence is represented as a list of word embeddings. The given training and test data have response and context as two fields. Embeddings for both context and response were generated. Then, the embeddings were combined such that the context comes before the response, the intuition being that it is the context that elicits a response from a user. Once the embeddings were extracted, the sequences of embeddings were padded to bring them to the same size.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 29, |
| "text": "(Xiao, 2018", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERT embeddings", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "The results given by BERT not being up to the mark led us to search for a Twitter-specific embedding, and thus we chose GloVe embeddings specifically trained on Twitter data. GloVe uses unsupervised learning for obtaining vector representations of words. The embeddings given by GloVe are non-contextual. Here we decided to use GloVe Twitter sentence embeddings for training the models, as they capture the overall meaning of the sentence in a relatively small amount of memory. This generated a list of size 200 for each input provided. Once the sentence embeddings were extracted, the context and the response were combined such that the context comes before the response. Context embeddings were generated independently of the response so that the sentiment of the response would not affect the sentiment of the context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GloVe embeddings", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "After extraction of the word embeddings, the next step is to train these to build a model which can be used to predict the class of test samples. Classifiers like Linear Support Vector Classifier (LSVC), Logistic Regression (LR), Gaussian Naive Bayes and Random Forest were used. Scikit-learn (Pedregosa et al., 2011) was used for training these models. Word embeddings were obtained for the test dataset in the same way as mentioned before. Now, they are ready for predictions.", |
| "cite_spans": [ |
| { |
| "start": 293, |
| "end": 317, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Predictions", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Google Colab with 25 GB RAM was used for the experiment, which includes extraction of embeddings, training the models and prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We use bert-as-service for generating the BERT embeddings. We spin up a bert-as-service server and create a client to get the embeddings. We use the uncased_L-12_H-768_A-12 pretrained BERT model to generate the embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting BERT embeddings", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "All of the context (i.e., 100%) provided in the dataset was used for this study. Word embeddings for the response and context are generated separately. Embeddings for each word in the response are extracted separately and appended to form a list. Every sentence in the context is appended one after the other, and the same is done for the response embeddings. The embeddings of the context and response fields are concatenated to get the final embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting BERT embeddings", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We used gensim to download the glove-twitter-200 pretrained model. Embeddings for the response and context are extracted separately. Sentences in the given context are appended to form a single sentence. Later, we generate the sentence embeddings for the response and context separately. The context embeddings and response embeddings are concatenated to generate the final embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extracting the GloVe embeddings", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We use the Scikit-learn machine learning library to train the classifiers (SVM, Logistic Regression, Gaussian Naive Bayes and Random Forest). Trained models are saved for later prediction. Using the saved models, we predict the test samples as SARCASM or NOT SARCASM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training the model", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The result was measured using the metric F-measure. F-measure is a measure of a test's accuracy and is calculated as the weighted harmonic mean of the precision and recall of the test (Zhang and Zhang, 2009) . Now, we discuss the results obtained with BERT and GloVe separately.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 207, |
| "text": "(Zhang and Zhang, 2009)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Among the classifiers mentioned in the previous section, good results were obtained with SVM and logistic regression, with the latter giving the best results. Table 1 shows the results of training the classifiers only on the response and excluding the context. Table 2 shows the results obtained with BERT including the context. It is clear that taking the context into consideration boosts the result.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 159, |
| "end": 166, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 261, |
| "end": 268, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "With BERT", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "This approach gave much better results when compared to the BERT embedding approach. Also, GloVe was much faster than BERT. Among the two classifiers, logistic regression gave the better results. Table 3 shows the results obtained with GloVe.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 196, |
| "end": 203, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "With GloVe", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Sarcasm detection can be done effectively using word embeddings. They are extremely useful as they capture the meaning of a word in a vector representation. Even though BERT gives contextual word representations, i.e. the same word occurring multiple times in a sentence may have different vectors, it did not perform up to the mark when compared to GloVe, which gives the same vector for a word regardless of context. However, this cannot be generalized; it may depend on the dataset. Among the classifiers, logistic regression always outperformed the other classifiers used in this study. (Table 3: Results with GloVe embeddings including both context and response. Linear Support Vector Classifier: 0.679; Logistic Regression: 0.690.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 592, |
| "end": 599, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Dr. Anand Kumar M for his immense support, patient guidance and enthusiastic encouragement throughout the research work. We would also like to thank the reviewers for their useful suggestions to improve the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Sarcasm, pretense, and the semantics/pragmatics distinction", |
| "authors": [ |
| { |
| "first": "Elisabeth", |
| "middle": [], |
| "last": "Camp", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "No\u00fbs", |
| "volume": "46", |
| "issue": "4", |
| "pages": "587--634", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elisabeth Camp. 2012. Sarcasm, pretense, and the semantics/pragmatics distinction. No\u00fbs, 46(4):587- 634.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.04805" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Fracking sarcasm using neural network", |
| "authors": [ |
| { |
| "first": "Aniruddha", |
| "middle": [], |
| "last": "Ghosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Veale", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 7th workshop on computational approaches to subjectivity, sentiment and social media analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "161--169", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking sar- casm using neural network. In Proceedings of the 7th workshop on computational approaches to sub- jectivity, sentiment and social media analysis, pages 161-169.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Poetics of Mind: Figurative Thought, Language, and Understanding. The Poetics of Mind: Figurative Thought, Language, and Understanding", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "W" |
| ], |
| "last": "Gibbs", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "W" |
| ], |
| "last": "Gibbs ;", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gibbs", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R.W. Gibbs, R.W. Gibbs, Cambridge University Press, and J. Gibbs. 1994. The Poetics of Mind: Figurative Thought, Language, and Understanding. The Poet- ics of Mind: Figurative Thought, Language, and Un- derstanding. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Identifying sarcasm in twitter: A closer look", |
| "authors": [ |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Gonz\u00e1lez-Ib\u00e1\u00f1ez", |
| "suffix": "" |
| }, |
| { |
| "first": "Smaranda", |
| "middle": [], |
| "last": "Muresan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nina", |
| "middle": [], |
| "last": "Wacholder", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers", |
| "volume": "2", |
| "issue": "", |
| "pages": "581--586", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1\u00f1ez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in twit- ter: A closer look. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers -Volume 2, HLT '11, page 581-586, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automatic sarcasm detection: A survey", |
| "authors": [ |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "J" |
| ], |
| "last": "Carman", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ACM Computing Surveys (CSUR)", |
| "volume": "50", |
| "issue": "5", |
| "pages": "1--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J Car- man. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5):1-22.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Harnessing context incongruity for sarcasm detection", |
| "authors": [ |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Vinita", |
| "middle": [], |
| "last": "Sharma", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "757--762", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/P15-2124" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 757-762, Beijing, China. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends", |
| "authors": [ |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Vaibhav", |
| "middle": [], |
| "last": "Tripathi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pushpak", |
| "middle": [], |
| "last": "Bhattacharyya", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "James" |
| ], |
| "last": "Carman", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aditya Joshi, Vaibhav Tripathi, Pushpak Bhat- tacharyya, and Mark James Carman. 2016. Harness- ing sequence labeling for sarcasm detection in dia- logue from tv series 'friends'. In CoNLL.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Lexical influences on the perception of sarcasm", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Kreuz", |
| "suffix": "" |
| }, |
| { |
| "first": "Gina", |
| "middle": [], |
| "last": "Caucci", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Workshop on Computational Approaches to Figurative Language", |
| "volume": "", |
| "issue": "", |
| "pages": "1--4", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roger Kreuz and Gina Caucci. 2007. Lexical influ- ences on the perception of sarcasm. In Proceed- ings of the Workshop on Computational Approaches to Figurative Language, pages 1-4, Rochester, New York. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The perfect solution for detecting sarcasm in tweets #not", |
| "authors": [ |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Liebrecht", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Kunneman", |
| "suffix": "" |
| }, |
| { |
| "first": "Antal", |
| "middle": [], |
| "last": "Van Den", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bosch", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "29--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christine Liebrecht, Florian Kunneman, and Antal van den Bosch. 2013. The perfect solution for de- tecting sarcasm in tweets #not. In Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analy- sis, pages 29-37, Atlanta, Georgia. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Maynard", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Greenwood", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)", |
| "volume": "", |
| "issue": "", |
| "pages": "4238--4243", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diana Maynard and Mark Greenwood. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 4238-4243, Reykjavik, Iceland. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A general approach to preprocessing text data", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Mayo", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Mayo. A general approach to preprocessing text data. https://www.kdnuggets.com/2017/12/general-approach-preprocessing-text-data.html.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Duchesnay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Sarcasm detection on Czech and English twitter", |
| "authors": [ |
| { |
| "first": "Tom\u00e1\u0161", |
| "middle": [], |
| "last": "Pt\u00e1\u010dek", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Habernal", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Hong", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "213--223", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom\u00e1\u0161 Pt\u00e1\u010dek, Ivan Habernal, and Jun Hong. 2014. Sarcasm detection on Czech and English twitter. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 213-223, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "From humor recognition to irony detection: The figurative language of social media", |
| "authors": [ |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Reyes", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Rosso", |
| "suffix": "" |
| }, |
| { |
| "first": "Davide", |
| "middle": [], |
| "last": "Buscaldi", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Data & Knowledge Engineering", |
| "volume": "74", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.datak.2012.02.005" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antonio Reyes, Paolo Rosso, and Davide Buscaldi. 2012. From humor recognition to irony detection: The figurative language of social media. Data & Knowledge Engineering, 74:1-12.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Sarcasm as contrast between a positive sentiment and negative situation", |
| "authors": [ |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashequl", |
| "middle": [], |
| "last": "Qadir", |
| "suffix": "" |
| }, |
| { |
| "first": "Prafulla", |
| "middle": [], |
| "last": "Surve", |
| "suffix": "" |
| }, |
| { |
| "first": "Lalindra", |
| "middle": [], |
| "last": "De Silva", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Gilbert", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruihong", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "704--714", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In EMNLP, pages 704-714.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "ICWSM - a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Tsur", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Davidov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oren Tsur, Dmitry Davidov, and Ari Rappoport. 2010. ICWSM - a great catchy name: Semi-supervised recognition of sarcastic sentences in online product reviews.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Detecting ironic intent in creative comparisons", |
| "authors": [ |
| { |
| "first": "Tony", |
| "middle": [], |
| "last": "Veale", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanfen", |
| "middle": [], |
| "last": "Hao", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "European Conference on Artificial Intelligence", |
| "volume": "215", |
| "issue": "", |
| "pages": "765--770", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tony Veale and Yanfen Hao. 2010. Detecting ironic intent in creative comparisons. In European Conference on Artificial Intelligence, volume 215, pages 765-770.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "bert-as-service", |
| "authors": [ |
| { |
| "first": "Han", |
| "middle": [], |
| "last": "Xiao", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han Xiao. 2018. bert-as-service. https://github. com/hanxiao/bert-as-service.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "text": "Results with BERT embeddings including both context and response", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |