{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:45.076372Z"
},
"title": "",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sarcasm analysis in user conversation text is the automatic detection of any irony, insult, hurtful, painful, caustic, humorous or vulgar content that degrades an individual. It is helpful in the fields of sentiment analysis and cyberbullying detection. With the immense growth of social media, sarcasm analysis helps to prevent insults, hurt and humour from affecting someone. In this paper, we present traditional machine learning approaches, a deep learning approach (RNN-LSTM) and BERT (Bidirectional Encoder Representations from Transformers) for identifying sarcasm. We used these approaches to build models, to identify and categorize how much conversation context or response is needed for sarcasm detection, and evaluated them on two social media forums, namely a Twitter conversation dataset and a Reddit conversation dataset. We compare the performance of the approaches and obtained best F1 scores of 0.722 and 0.679 for the Twitter and Reddit forums respectively.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Sarcasm analysis in user conversation text is the automatic detection of any irony, insult, hurtful, painful, caustic, humorous or vulgar content that degrades an individual. It is helpful in the fields of sentiment analysis and cyberbullying detection. With the immense growth of social media, sarcasm analysis helps to prevent insults, hurt and humour from affecting someone. In this paper, we present traditional machine learning approaches, a deep learning approach (RNN-LSTM) and BERT (Bidirectional Encoder Representations from Transformers) for identifying sarcasm. We used these approaches to build models, to identify and categorize how much conversation context or response is needed for sarcasm detection, and evaluated them on two social media forums, namely a Twitter conversation dataset and a Reddit conversation dataset. We compare the performance of the approaches and obtained best F1 scores of 0.722 and 0.679 for the Twitter and Reddit forums respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media have shown rapid growth in user counts and have been the object of scientific and sentiment analysis, as in (Kalaivani A and Thenmozhi D, 2018). Sarcasm occurs frequently in user-generated content such as blogs, forums and microposts, especially in English, and is inherently difficult to analyze, not only for a machine but even for a human. Sarcasm analysis is useful for several applications such as sentiment analysis, opinion mining, hate speech identification, offensive and abusive language detection, advertising and cyberbullying detection.",
"cite_spans": [
{
"start": 118,
"end": 153,
"text": "(Kalaivani A and Thenmozhi D, 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(Debanjan Ghosh et al., 2018) investigated how much context is needed to determine whether a conversation context is sarcastic and analysed verbal irony in tweets using LSTMs with different attention mechanisms, still facing problems with the usage of slang, rhetorical questions, numbers and out-of-vocabulary words in tweets. In recent years, several research works on sarcasm detection have been carried out in the Natural Language Processing community (Aditya Joshi et al., 2017).",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Ghosh et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 476,
"end": 495,
"text": "Joshi at el., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Figurative Language Processing 2020 Task 2 is a shared task on sarcasm detection in social media forums. It focuses on identifying whether a given conversation text is sarcastic and on finding how much context is helpful for sarcasm identification, modelling the given instance either in isolation or combined with its context. It covers two social media forums, namely a Twitter conversation dataset and a Reddit conversation dataset (Khodak et al., 2017). For both datasets, the organizers provide the context and the response, where the response is a reply to the context and the context is a full dialogue conversation thread. The computational task is to detect and identify sarcasm and to understand how much conversation context is needed or helpful for sarcasm detection.",
"cite_spans": [
{
"start": 399,
"end": 420,
"text": "(Khodak et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The challenges of this shared task include: a) the small dataset makes it hard to train complex models; b) the characteristics of language on social media forums pose difficulties such as out-of-vocabulary words and ungrammatical context; c) determining how much conversation text is needed to detect sarcasm, given the usage of slang, rhetorical questions, capitalized words, numbers, abbreviations, prolonged words, hashtags, URLs, repeated punctuation, contractions and continuous words without spaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address the problems of hashtags, continuations of words without spaces and URLs, and classify which context is helpful for finding sarcasm. To address these problems, we pre-processed the text using machine learning libraries such as NLTK and Gensim and classified it using different traditional machine learning techniques and a deep learning technique, and finally we obtained the best results with BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sarcasm Identification and Detection in Conversion Context using BERT",
"sec_num": null
},
{
"text": "(Aniruddha Ghosh and Tony Veale, 2016) used a neural network semantic model to capture temporal text patterns in shorter texts. For example, this model classified \"I Just Love Mondays!\" correctly as sarcasm, but it failed to classify \"Thank God It's Monday!\" as sarcasm, even though both are similar at the conceptual level. (Keith Cortis et al., 2017) participated in a SemEval-2017 shared task to detect sentiment and humour and to predict the sentiment score of companies' stocks in shorter texts. (Raj Kumar Gupta and Yinping Yang, 2017) participated in the SemEval-2017 Task 4 shared task to detect sarcasm using an SVM-based classifier and developed CrystalNest to analyse features combining a derived sarcasm score, sentiment scores, the NRC lexicon, n-grams, word embedding vectors and part-of-speech features.",
"cite_spans": [
{
"start": 11,
"end": 38,
"text": "Ghosh and Tony Veale, 2016)",
"ref_id": "BIBREF3"
},
{
"start": 340,
"end": 360,
"text": "Cortis et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(David Bamman and Noah A. Smith, 2015) used predictive features and analysed utterances on Twitter based on properties of the author, audience and environment. (Mondher Bouazizi and Tomoaki Otsuki, 2016) used a pattern-based approach to detect sarcasm, analysing four types of features, namely sentiment-related, punctuation-related, syntactic and semantic, and pattern-related features, with classification done by classifiers such as Random Forest, Support Vector Machine, k-Nearest Neighbours and Maximum Entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(Meishan Zhang et al., 2016) used a bidirectional gated recurrent neural network and a discrete model to detect sarcasm, analysing local and contextual information with GloVe word embeddings. (Malave N et al., 2020) used context-based evaluation of the data to determine user behaviour and context information for sarcasm detection. (Yitao Cai et al., 2019) used a multi-modal hierarchical fusion model to detect multi-modal sarcasm in tweets consisting of text and images on Twitter.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "Zhang et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 218,
"end": 241,
"text": "(Malave N et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 373,
"end": 397,
"text": "(Yitao Cai et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In our approach, we used the Twitter and Reddit datasets given by the Figurative Language Processing 2020 shared task on sarcasm detection. The datasets are given with columns named label, context and response, where the response is a reply to the context and the context is the full conversation dialogue, separated as C1, C2, C3, etc.; C2 is a reply to context C1 and C3 is a reply to context C2. Both datasets contain the labels SARCASM and NOT_SARCASM. In the Twitter dataset, the training data has 5000 conversation tweets, of which 2500 are sarcastic and 2500 are not sarcastic, and the test data has 1800 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "In the Reddit dataset, the training data has 4400 conversation posts, of which 2200 are sarcastic and 2200 are not sarcastic, and the test data has 1800 posts. We pre-processed the text to remove @USER mentions, URLs and prolonged words like \"ohhhhhh\", replace masked words like F * * king with Fucking, expand contractions like Didn't to Did not, remove hashtags and separate continuous sentences without spaces into individual words. The Tweet tokenizer is used to tokenize the words and to obtain the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "We employed traditional machine learning techniques, a Recurrent Neural Network with LSTM (RNN-LSTM) and BERT. In the machine learning approach, we first used the utterance of combined context and response (CR) for detecting sarcasm, then pre-processed the data using Gensim libraries to remove hashtags, punctuation, white spaces, numeric content and stop words, and converted the text to lower case. We used word clouds to identify and categorize the most frequent words appearing in sarcastic and non-sarcastic messages, as shown below in Figure 1 and Figure 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 596,
"end": 604,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 609,
"end": 617,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "We used the Doc2Vec transformer and the TfidfVectorizer for feature extraction and classified using Logistic Regression (LR), Random Forest (RF), XGBoost (XGB), Linear Support Vector Machine (SVC) and Gaussian Na\u00efve Bayes (NB). Using the TfidfVectorizer, we obtained 28761 features for the 5000 tweets. Table 1 presents the cross-validation accuracies of the different machine learning classifiers on the Twitter data, and Table 2 presents the cross-validation accuracies of the models based on the feature extraction on the Reddit data.",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 465,
"end": 472,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "In the Twitter data, we chose the scores above 0.70 from the cross-validation accuracies of the machine learning techniques. Based on the cross-validation scores, we obtained the best accuracy scores with the SVM, Logistic Regression and NB classifiers on the combined context text (CR) with the Tfidf vectorizer, and with the Logistic Regression and Gaussian NB models on the isolated response (R) text with the Tfidf vectorizer. In the Reddit data, we chose the scores above 0.55 from the cross-validation accuracies of the machine learning techniques. Based on the cross-validation scores, we obtained the best accuracy scores with Logistic Regression and the XGBoost classifier on the combined text (CR) with the Tfidf vectorizer, and with the Logistic Regression and Gaussian NB models on the isolated response (R) text with the Tfidf vectorizer. For both datasets, the results show that the Doc2Vec transformer does not perform well because of ungrammatical sentences, and that the Tfidf vectorizer performs better than the Doc2Vec transformer on dialogue conversation threads.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "In the RNN-LSTM method, we used the combined context text with the response, pre-processed it using NLTK libraries, tokenized it with the word tokenizer, lemmatized the words and then removed the stop words. The training data has 325382 words in total, with a vocabulary size of 32756 and a maximum sentence length of 568; the test data has 30782 words in total, with a vocabulary size of 8824 and a maximum sentence length of 467. We used the Word2Vec embedding model for embedding the words and obtained 32668 unique tokens. We evaluated using RNN-LSTM and trained the deep learning models with a batch size of 128 and a dropout of 0.2 for 5 epochs to build the model. We obtained an accuracy of 0.4890, which is low compared with the machine learning approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "For the BERT model: the Google research team released BERT (Devlin et al., 2018), which achieves good performance on many NLP tasks. We used the combined context text, the isolated context and the isolated response to build the models. We used the BERT uncased model for training, with a batch size of 32, a learning rate of 2e-5 and 3.0 training epochs. Warmup is a period in which the learning rate is small and gradually increases, which usually helps training; the warmup proportion is 0.1, and the model configuration uses checkpoints every 300 steps and summary steps of 100. We obtained an accuracy score of 0.77. Comparing the overall cross-validation accuracy scores, BERT performs better than the machine learning approaches and the deep learning technique.",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Methodology",
"sec_num": "3"
},
{
"text": "We evaluated on the test data of the Twitter and Reddit datasets shared by the Figurative Language Processing 2020 shared task organizers. The performance is evaluated using precision, recall and F1 score as metrics. We chose the classifiers to predict the test data based on the cross-validation performance on the training data. We predicted the test data using various combinations of conversation context and response: CR represents the combined context of sentences with the response, C represents the combined full context of sentences without the response, PCRW represents the processed combined context of meaningful words with the response, PCW represents the combined full context of meaningful words without the response, PC1RW represents the processed isolated first context of meaningful words with the response, PC1W represents the isolated first context of meaningful words without the response, R represents the response, PC1R represents the processed second context with the response, and PR represents the processed response. The results of the approaches are presented in Table 3, which shows that the response text from the conversation dialogue using BERT achieves higher performance than the others for the shared task on the Twitter dataset, and in Table 4, which shows that BERT on the response text from the conversation dialogue thread performs well for the shared task on the Reddit dataset. The best results were obtained using the BERT model with the isolated response (R) text for both the Twitter and Reddit datasets. We noticed that BERT performs well on continuous conversation dialogues, or continuous sentences with previous dialogues, compared with only the meaningful words from the conversation context. On both datasets, RNN-LSTM performs worse than SVM, NB and LR because of the smaller dataset; the machine learning approaches perform better on the smaller dataset, but the BERT model performs well on the response text of both the Twitter and Reddit datasets with ungrammatical sentences even though the data size is small. Figure 3 shows chart representations of the performance analysis of the different methods on the Twitter data, and Figure 4 shows chart representations of the performance analysis of the different methods on the Reddit data.",
"cite_spans": [],
"ref_spans": [
{
"start": 1104,
"end": 1111,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 1262,
"end": 1269,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 2045,
"end": 2053,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2160,
"end": 2168,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We implemented traditional machine learning, a deep learning approach and the BERT model for identifying sarcasm in conversation dialogue threads and detecting sarcasm on social media. The approaches were evaluated on the Figurative Language 2020 dataset. The given utterances of combined and isolated text were pre-processed and vectorized using word embeddings in the deep learning models. We employed RNN-LSTM to build models for both datasets. The instances were vectorized using Doc2Vec and TF-IDF scores for the traditional machine learning models. The classifiers Logistic Regression (LR), Random Forest (RF), XGBoost (XGB), Linear Support Vector Machine (SVC) and Gaussian Na\u00efve Bayes (NB) were employed to build models for both the Twitter and Reddit datasets. The BERT uncased model with the isolated response context gives better results for both datasets. The performance may be improved further by using larger datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic sarcasm detection: A survey",
"authors": [
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Carman",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, A., Bhattacharyya, P., and Carman, M. J. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5), 73.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sarcasm analysis using conversation context",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "4",
"pages": "755--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ghosh, D., Fabbri, A. R., and Muresan, S. 2018. Sarcasm analysis using conversation context. Computational Linguistics, 44(4), 755-792.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A large self-annotated corpus for sarcasm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.05579"
]
},
"num": null,
"urls": [],
"raw_text": "Khodak, M., Saunshi, N., and Vodrahalli, K. 2017. A large self-annotated corpus for sarcasm. arXiv preprint arXiv:1704.05579.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fracking Sarcasm using Neural Network",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.13140/RG.2.2.16560.15363"
]
},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking Sarcasm using Neural Network. ResearchGate publication, conference paper. DOI: 10.13140/RG.2.2.16560.15363.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Cortis",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Daudert",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Hurlimann",
"suffix": ""
},
{
"first": "Manel",
"middle": [],
"last": "Zarrouk",
"suffix": ""
},
{
"first": "Siegfried",
"middle": [],
"last": "Handschuh",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "519--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith Cortis, Andre Freitas, Tobias Daudert, Manuela Hurlimann, Manel Zarrouk, Siegfried Handschuh, and Brian Davis. 2017. SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News. Proceedings of the 11th International Workshop on Semantic Evaluations, pages 519-535, Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "CrystalNest at SemEval-2017 Task 4: Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification",
"authors": [
{
"first": "Raj",
"middle": [],
"last": "Kumar Gupta",
"suffix": ""
},
{
"first": "Yinping",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raj Kumar Gupta, and Yinping Yang. 2017. CrystalNest at SemEval-2017 Task 4: Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification, ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Contextualized Sarcasm Detection on Twitter",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A. Smith. 2016. Contextualized Sarcasm Detection on Twitter, Association for the Advancement of Artificial Intelligence (www.aaai.org).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Pattern-Based Approach for Sarcasm Detection on Twitter",
"authors": [
{
"first": "Mondher Bouazizi And Tomoaki",
"middle": [],
"last": "Otsuki",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2016.2594194"
]
},
"num": null,
"urls": [],
"raw_text": "Mondher Bouazizi and Tomoaki Otsuki (Ohtsuki). 2016. A Pattern-Based Approach for Sarcasm Detection on Twitter. IEEE Access. DOI: 10.1109/ACCESS.2016.2594194.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sentimental Analysis using Deep Learning Techniques",
"authors": [
{
"first": "Kalaivani",
"middle": [],
"last": "Thenmozhi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "International journal of recent technology and engineering",
"volume": "",
"issue": "",
"pages": "2277--3878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalaivani A and Thenmozhi D. 2019. Sentimental Analysis using Deep Learning Techniques, International journal of recent technology and engineering, ISSN: 2277-3878.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Tweet Sarcasm Detection Using Deep Neural Network",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2449--2460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet Sarcasm Detection Using Deep Neural Network. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2449-2460.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sarcasm Detection on Twitter: User Behavior Approach",
"authors": [
{
"first": "N",
"middle": [],
"last": "Malave",
"suffix": ""
},
{
"first": "S",
"middle": [
"N"
],
"last": "Dhage",
"suffix": ""
}
],
"year": 2020,
"venue": "Intelligent Systems, Technologies and Applications. Advances in Intelligent Systems and Computing",
"volume": "910",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-981-13-6095-4_5"
]
},
"num": null,
"urls": [],
"raw_text": "Malave N., and Dhage S.N. 2020. Sarcasm Detection on Twitter: User Behavior Approach. In: Thampi S. et al. (eds) Intelligent Systems, Technologies and Applications. Advances in Intelligent Systems and Computing, vol 910. Springer, Singapore. DOI https://doi.org/10.1007/978-981-13-6095-4_5.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model",
"authors": [],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2506--2515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yitao Cai, Huiyu Cai and Xiaojun Wan. 2019. Multi- Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506-2515 Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Sarcastic words"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Not Sarcastic Words"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 4: Results analysis for Reddit Dataset"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Combined Context</td><td colspan=\"2\">Response(R)</td></tr><tr><td>Models</td><td colspan=\"2\">and Response (CR)</td><td/><td/></tr><tr><td/><td colspan=\"2\">Doc2Vec Tfidf</td><td colspan=\"2\">Doc2Vec Tfidf</td></tr><tr><td>LR</td><td>0.5061</td><td>0.552</td><td>0.497</td><td>0.597</td></tr><tr><td>RF</td><td>0.4947</td><td>0.539</td><td>0.505</td><td>0.564</td></tr><tr><td>XGB</td><td>0.4965</td><td>0.565</td><td>0.500</td><td>0.582</td></tr><tr><td>SVC</td><td>0.5029</td><td>0.538</td><td>0.493</td><td>0.587</td></tr><tr><td>NB</td><td>0.4977</td><td>0.549</td><td>0.493</td><td>0.595</td></tr></table>",
"text": "Accuracies of the models based on the feature extraction of the utterance of combined and isolated text -Twitter data",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table/>",
"text": "Accuracies of the models based on the feature extraction of the utterance of combined and isolated text -Reddit data",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"content": "<table><tr><td>Type</td><td colspan=\"2\">Precision Recall F1 score</td></tr><tr><td>BERT(C)</td><td>0.587</td><td>0.589 0.585</td></tr><tr><td>BERT(CR)</td><td>0.493</td><td>0.492 0.477</td></tr><tr><td>BERT(R)</td><td>0.679</td><td>0.679 0.679</td></tr><tr><td>BERT(PR)</td><td>0.638</td><td>0.638 0.637</td></tr><tr><td>LR(CR)</td><td>0.526</td><td>0.526 0.526</td></tr><tr><td>LR(R)</td><td>0.563</td><td>0.564 0.563</td></tr><tr><td>NB(R)</td><td>0.557</td><td>0.557 0.557</td></tr><tr><td>SVC(R)</td><td>0.551</td><td>0.551 0.550</td></tr><tr><td>XGB(R)</td><td>0.539</td><td>0.543 0.528</td></tr><tr><td>SVC(CR)</td><td>0.516</td><td>0.516 0.516</td></tr><tr><td>XGB(CR)</td><td>0.544</td><td>0.544 0.544</td></tr></table>",
"text": "Results for Twitter Dataset",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table/>",
"text": "Results for Reddit Dataset",
"html": null,
"type_str": "table"
}
}
}
}