{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:41.269948Z"
},
"title": "Neural Sarcasm Detection using Conversation Context",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Jaiswal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TCS Research",
"location": {
"settlement": "New Delhi",
"country": "India"
}
},
"email": "nikhil.jais@tcs.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media platforms and discussion forums such as Reddit, Twitter, etc. are filled with figurative languages. Sarcasm is one such category of figurative language whose presence in a conversation makes language understanding a challenging task. In this paper, we present a deep neural architecture for sarcasm detection. We investigate various pre-trained language representation models (PLRMs) like BERT, RoBERTa, etc. and fine-tune it on the Twitter dataset 1. We experiment with a variety of PLRMs either on the twitter utterance in isolation or utilizing the contextual information along with the utterance. Our findings indicate that by taking into consideration the previous three most recent utterances, the model is more accurately able to classify a conversation as being sarcastic or not. Our best performing ensemble model achieves an overall F1 score of 0.790, which ranks us second 2 on the leaderboard of the Sarcasm Shared Task 2020.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media platforms and discussion forums such as Reddit, Twitter, etc. are filled with figurative languages. Sarcasm is one such category of figurative language whose presence in a conversation makes language understanding a challenging task. In this paper, we present a deep neural architecture for sarcasm detection. We investigate various pre-trained language representation models (PLRMs) like BERT, RoBERTa, etc. and fine-tune it on the Twitter dataset 1. We experiment with a variety of PLRMs either on the twitter utterance in isolation or utilizing the contextual information along with the utterance. Our findings indicate that by taking into consideration the previous three most recent utterances, the model is more accurately able to classify a conversation as being sarcastic or not. Our best performing ensemble model achieves an overall F1 score of 0.790, which ranks us second 2 on the leaderboard of the Sarcasm Shared Task 2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sarcasm can be defined as a communicative act of intentionally using words or phrases which tend to transform the polarity of a positive utterance into its negative counterpart and vice versa. The significant increase in the usage of social media channels has generated content that is sarcastic and ironic in nature. The apparent reason for this is that social media users tend to use various figurative language forms to convey their message. The detection of sarcasm is thus vital for several NLP applications such as opinion minings, sentiment analysis, etc (Maynard and Greenwood, 2014) . This leads to 1 The dataset is provided by the organizers of Sarcasm Shared Task FigLang-2020 2 We are ranked 8th with an F1 score of 0.702 on the Reddit dataset leaderboard using the same approach. But we do not describe those results here as we could not test all our experiments within the timing constraints of the Shared Task. a considerable amount of research in the sarcasm detection domain among the NLP community in recent years.",
"cite_spans": [
{
"start": 562,
"end": 591,
"text": "(Maynard and Greenwood, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 608,
"end": 609,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 670,
"end": 687,
"text": "Task FigLang-2020",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Shared Task on Sarcasm Detection 2020 aims to explore various approaches for sarcasm detection in a given textual utterance. Specifically, the task is to understand how much conversation context is needed or helpful for sarcasm detection. Our approach for this task focuses on utilizing various state-of-the-art PLRMs and fine-tuning it to detect whether a given conversation is sarcastic. We apply an ensembling strategy consisting of models trained on different length conversational contexts to make more accurate predictions. Our best performing model (Team name -nclabj) achieves an F1 score of 0.790 on the test data in the CoadaLab evaluation platform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset assigned for this task is collected from the popular social media platform, Twitter. Each training data contains the following fields: \"label\" (i.e., \"SARCASM\" or \"NOTSARCASM\"), \"response\" (the Tweet utterance), \"context\" (i.e., the conversation context of the \"response\"). Our objective here is to take as input a response along with its optional conversational context and predict whether the response is sarcastic or not. This problem can be modeled as a binary classification task. The predicted response on the test set is evaluated against the true label. Three performance metrics, namely, Precision, Recall, and F1 Score are used for final evaluation. tion problem (Ghosh et al., 2015) or considering sarcasm as a contrast between a positive sentiment and negative situation (Riloff et al., 2013; Maynard and Greenwood, 2014; Joshi et al., 2015 Joshi et al., , 2016b Ghosh and Veale, 2016) . Recently, few works have taken into account the additional context information along with the utterance. (Wallace et al., 2014) demonstrate how additional contextual information beyond the utterance is often necessary for humans as well as computers to identify sarcasm. (Schifanella et al., 2016) propose a multi-modal approach to combine textual and visual features for sarcasm detection. (Joshi et al., 2016a) model sarcasm detection as a sequence labeling task instead of a classification task. (Ghosh et al., 2017) investigated that the conditional LSTM network (Rockt\u00e4schel et al., 2015) and LSTM networks with sentence-level attention on context and response achieved significant improvement over the LSTM model that reads only the response. Therefore, the new trend in the field of sarcasm detection is to take into account the additional context information along with the utterance. The objective of this Shared Task is to investigate how much of the context information is necessary to classify an utterance as being sarcastic or not.",
"cite_spans": [
{
"start": 685,
"end": 705,
"text": "(Ghosh et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 795,
"end": 816,
"text": "(Riloff et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 817,
"end": 845,
"text": "Maynard and Greenwood, 2014;",
"ref_id": "BIBREF14"
},
{
"start": 846,
"end": 864,
"text": "Joshi et al., 2015",
"ref_id": "BIBREF9"
},
{
"start": 865,
"end": 886,
"text": "Joshi et al., , 2016b",
"ref_id": "BIBREF10"
},
{
"start": 887,
"end": 909,
"text": "Ghosh and Veale, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1017,
"end": 1039,
"text": "(Wallace et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 1183,
"end": 1209,
"text": "(Schifanella et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 1303,
"end": 1324,
"text": "(Joshi et al., 2016a)",
"ref_id": "BIBREF7"
},
{
"start": 1411,
"end": 1431,
"text": "(Ghosh et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 1479,
"end": 1505,
"text": "(Rockt\u00e4schel et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "2"
},
{
"text": "We describe our proposed system for sarcasm detection in this section. We frame this problem as a binary classification task and apply a transfer learning approach to classify the tweet as either sarcastic or not. We experiment with several state of the art PLRMs like BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019) , as well as pre-trained embeddings representations models such as ELMo (Peters et al., 2018) , USE (Cer et al., 2018) , etc. and fine-tune it on the assigned Twitter dataset. We briefly review these models in subsections 4. For fine-tuning, we add additional dense layers and train the entire model in an end to end manner. Figure 1 illustrates one such approach for fine-tuning a RoBERTa model. We sequentially unfreeze the layers with each ongoing epoch. We apply a model ensembling strategy called \"majority voting\", as shown in Figure 2 to come out with our final predictions on the test data. In this ensemble technique, we take the prediction of several models and choose the label predicted by the maximum number of models. ELMo introduces a method to obtain deep contextualized word representation. Here, the researchers build a bidirectional Language model (biLM) with a two-layered bidirectional LSTM architecture and obtain the word vectors through a learned function of the internal states of biLM. This model is trained on 30 million sentence corpus, and thus the word embeddings obtained using this model can be used to increase the classification performance in several NLP tasks. For our task, we utilize the ELMo embeddings to come out with a feature representation of the words in the input utterance and pass it through three dense layers to perform the binary classification task.",
"cite_spans": [
{
"start": 274,
"end": 295,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 306,
"end": 324,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 397,
"end": 418,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 425,
"end": 443,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 650,
"end": 656,
"text": "Figure",
"ref_id": null
},
{
"start": 858,
"end": 866,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
{
"text": "USE presents an approach to create embedding vector representation of a complete sentence to specifically target transfer learning to other NLP tasks. There are two variants of USE based on trade-offs in compute resources and accuracy. The first variant uses an encoding sub-graph of the transformer architecture to construct sentence embeddings (Vaswani et al., 2017) and achieve higher performance figures. The second variant is a light model that uses a deep averaging network (DAN) (Iyyer et al., 2015) in which first the input embed-ding for words and bi-grams are averaged and then passed through a feedforward neural network to obtain sentence embeddings. We utilize the USE embeddings from the Transformer architecture on our data and perform the classification task by passing them through three dense layers.",
"cite_spans": [
{
"start": 346,
"end": 368,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 486,
"end": 506,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Universal Sentence Encoder (USE)",
"sec_num": "4.2"
},
{
"text": "BERT, a Transformer language model, achieved state-of-the-art results on eleven NLP tasks. There are two pre-training tasks on which BERT is trained on. In the first task, also known as masked language modeling (MLMs), 15% of words are randomly masked in each sequence, and the model is used to predict the masked words. The second task, also known as the next sentence prediction (NSP), in which given two sentences, the model tries to predict whether one sentence is the next sentence of the other. Once the above pre-training phase is completed, this can be extended for classification related task with minimal changes. This is also known as BERT fine-tuning, which we apply for our sarcasm detection task. In the paper, two models (BERT BASE & BERT LARGE ) are released depending on the number of transformer blocks (12 vs. 24), attention heads (12 vs. 16), and hidden units size (768 vs. 1024). We experiment with BERT LARGE model for our task, since it generally performs better as compared to the BERT BASE model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Encoder Representations from Transformers (BERT)",
"sec_num": "4.3"
},
{
"text": "RoBERTa presents improved modifications for training BERT models. The modifications are as follows: 1. training the model for more epochs (500K vs. 100K) 2. using bigger batch sizes (around 8 times) 3. training on more data (160GB vs. 16 GB). Apart from the above parameters changes, byte-level BPE vocabulary is used instead of character-level vocabulary. The dynamic masking technique is used here instead of the static masking used in BERT. Also, the NSP task is removed following some recent works that have questioned the necessity of the NSP loss (Sun et al., 2019; Lample and Conneau, 2019) . Summarizing, RoBERTa is trained with dynamic masking, sentences without NSP loss, large batches, and a larger byte-level BPE. ",
"cite_spans": [
{
"start": 553,
"end": 571,
"text": "(Sun et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 572,
"end": 597,
"text": "Lample and Conneau, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustly Optimized BERT Approach (RoBERTa)",
"sec_num": "4.4"
},
{
"text": "The dataset assigned for this task is collected from Twitter. There are 5,000 English Tweets for training, and 1,800 English Tweets for testing purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Preparation",
"sec_num": "5.1"
},
{
"text": "We use 10% of the training data for the validation set to tune the hyper-parameters of our model.We apply several preprocessing steps to clean the given raw data. Apart from the standard preprocessing steps such as lowercasing the letters, removal of punctuations and emojis, expansion of contractions, etc., we remove the usernames from the tweets. Also, since hashtags generally consist of phrases in CamelCase letters, we split them into individual words since they carry the essential information about the tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Preparation",
"sec_num": "5.1"
},
{
"text": "To incorporate contextual information along with a given tweet, we prepare the data in the manner, as shown in Table 1 . For data in which only the previous two turns are available, for them, only those two turns are considered in CON3 & CON case illustrated in Table 1 . We fix the maximum sequence length based on the coverage of the data (greater than 90th percentile) in the training and test set. This sequence length is determined by considering each word as a single token.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 262,
"end": 269,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Preparation",
"sec_num": "5.1"
},
{
"text": "Here, we describe a detailed set up of our experiments and the different hyper-parameters of our models for better reproducibility. We experiment with various advanced state-of-the-art methodologies such as ELMo, USE, BERT, and RoBERTa. We use the validation set to tune the hyper-parameters. We use Adam (Kingma and Ba, 2014) optimizer in all our experiments. We use dropout regularization (Srivastava et al., 2014) and early stopping (Yao et al., 2007) to prevent overfitting. We use a batch size of {2, 4, 8, 16} depending on the model size and the sequence length.",
"cite_spans": [
{
"start": 305,
"end": 326,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 391,
"end": 416,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 436,
"end": 454,
"text": "(Yao et al., 2007)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.2"
},
{
"text": "Firstly, the data is prepared as mentioned in subsection 5.1. For fine-tuning ELMo, USE, and BERT LARGE models, we use the module from Ten- sorflow Hub 345 and wrap it in a Keras Lambda layer whose weights are also trained during the fine-tuning process. We add three dense layers {512, 256, 1} with a dropout of 0.5 between these layers. The relu activation function is being applied between the first two layers whereas sigmoid is used at the final layer. ELMo and USE models are trained for 20 epochs while BERT LARGE is trained for 5 epochs. During the training, only the best model based on the minimum validation loss was saved by using the Keras ModelCheckpoint callback. Instead of using a threshold value of 0.5 for binary classification, a whole range of threshold values from 0.1 to 0.9 with an interval of 0.01 is experimented on the validation set. The threshold value for which the highest validation accuracy is obtained is selected as the final threshold and is being applied on the test set to get the test class predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.2"
},
{
"text": "For fine-tuning RoBERTa LARGE model, we use the fastai (Howard and Gugger, 2020) framework and utilize PLRMs from HuggingFace's Transformers library (Wolf et al., 2019) . HuggingFace library contains a collection of state-of-the-art PLRMs which is being widely used by the researcher and practitioner communities. Incorporating Hugging-Face library with fastai allows us to utilize powerful fastai capabilities such as Discriminate Learning Rate, Slanted Triangular Learning Rate and Gradual Unfreezing Learning Rate on the powerful pretrained Transformer models. For our experiment, first of all, we extract the pooled output (i.e. the last layer hidden-state of the first token of the sequence (CLS token) further processed by a linear layer and a Tanh activation function). It is then passed through a linear layer with two neurons followed by a softmax activation function. We use a learning rate of 1e-5 and utilize the \"1cycle\" learning rate policy for super-convergence, as suggested by (Smith, 2015) . We gradually unfreeze the layers and train on a 1cycle manner. After unfreezing the last three layers, we unfreeze the entire layers and train in the similar 1cycle manner. We stop the training when the validation accuracy does not improve consecutively for three epochs.",
"cite_spans": [
{
"start": 149,
"end": 168,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 994,
"end": 1007,
"text": "(Smith, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.2"
},
{
"text": "We use a simple ensembling technique called majority voting to ensemble the predictions of different models to further improvise the test accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.2"
},
{
"text": "Here, we compare and discuss the results of our experiments. First, we summarize the results of the individual model on the test set using different variants of data in Tables 2 & 3. From Table 2 , we can observe that adding context information of specific lengths helps in improving the classification performance in almost all the models. USE results are better as compared to the ELMo model since Transformers utilized in the USE are able to handle sequential data comparatively better than that as LSTMs being used in ELMo. On the other hand, BERT LARGE outperforms USE with the increase in the length of context history. The highest test accuracy by BERT LARGE is obtained on the CON3 variant of data which depicts the fact that adding most recent three turns of context history helps the model to classify more accurately. This hypothesis is further supported from the experiments when a similar trend occurs with the RoBERTa LARGE model. Since the results obtained by RoBERTa are comparatively better than other models, we train this model once again on the same train and validation data with different weight initialization. By doing this, we can have a variety of models to build our final ensemble architecture. The evaluation metrics used are Precision (Pr), Recall (Re), F1-score (F1).",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 195,
"text": "Tables 2 & 3. From Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "5.3"
},
{
"text": "As observed in Table 3 , RoBERTa fine-tuned on the CON3 variant of data outperforms all other approaches. In the case of fine-tuning PLRMs like BERT LARGE & RoBERTa LARGE on this data, we can observe the importance of most recent three turns of context history. From the experiments, we conclude that on increasing the context history along with the utterance, the model can learn a better representation of the utterance and can classify the correct class more accurately. Finally, RoBERTa model outperforms every other model because this model is already an optimized and improved version of the BERT model. Table 4 summarizes the results of our various ensemble models. For ensembling, we choose different variants of best performing models on the test data and apply majority voting on it to get the final test predictions. We experiment with several combinations of the models and report here the results of some of the best performing ensembles. We can observe that the ensemble model consisting of top three individual models gave us the best results.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 610,
"end": 617,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "5.3"
},
{
"text": "In this work, we have presented an effective methodology to tackle the sarcasm detection task on the twitter dataset by framing it as a binary classification problem. We showed that by finetuning PLRMs on a given utterance along with its specific length context history, we could successfully classify the utterance as being sarcastic or not. We experimented with different length context history and concluded that by taking into account the most recent three conversation turns, the model was able to obtain the best results. The fine-tuned RoBERTa LARGE model outperformed every other experimented models in terms of precision, recall, and F1 score. We also demonstrated that we could obtain a significant gain in the performance by using a simple ensembling technique called majority voting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "In the future, we would like to explore with these PLRMs on other publicly available datasets. We also aim to dive deep into the context history information and derive insights about the contextual part, which helps the model in improvising the classification result. We also wish to investigate more complex ensemble techniques to observe the performance gain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "6"
},
{
"text": "Related WorkVarious attempts have been made for sarcasm detection in recent years(Joshi et al., 2017). Researchers have approached this task through different methodologies, such as framing it as a sense disambigua-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tfhub.dev/google/elmo/2 4 https://tfhub.dev/google/ universal-sentence-encoder-large/35 https://tfhub.dev/tensorflow/bert_en_ uncased_L-24_H-1024_A-16/1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Con- stant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin et al. 2019. BERT: pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fracking sarcasm using neural network",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.13140/RG.2.2.16560.15363"
]
},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking sar- casm using neural network.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The role of conversation context for sarcasm detection in online interactions",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"Richard"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2017,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Alexander Richard Fabbri, and Smaranda Muresan. 2017. The role of conversation context for sarcasm detection in online interactions. CoRR, abs/1707.06226.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1003--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 1003- 1012, Lisbon, Portugal. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fastai: A layered api for deep learning",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
}
],
"year": 2020,
"venue": "Information",
"volume": "11",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.3390/info11020108"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sylvain Gugger. 2020. Fas- tai: A layered api for deep learning. Information, 11(2):108.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1681--1691",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1681-1691, Beijing, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Harnessing sequence labeling for sarcasm detection in dialogue from tv series",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Tripathi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, Mark Carman, and Vaibhav Tripathi. 2016a. Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends'.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic sarcasm detection: A survey",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J"
],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Comput. Surv",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3124420"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J. Car- man. 2017. Automatic sarcasm detection: A survey. ACM Comput. Surv., 50(5).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Harnessing context incongruity for sarcasm detection",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vinita",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "757--762",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2124"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 757-762, Beijing, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Are word embedding-based features useful for sarcasm detection? CoRR",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"James"
],
"last": "Carman",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vaibhav Tripathi, Kevin Patel, Pushpak Bhattacharyya, and Mark James Carman. 2016b. Are word embedding-based features useful for sar- casm detection? CoRR, abs/1610.00883.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. CoRR, abs/1901.07291.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Greenwood",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "4238--4243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Maynard and Mark Greenwood. 2014. Who cares about sarcastic tweets? investigating the im- pact of sarcasm on sentiment analysis. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation (LREC'14), pages 4238-4243, Reykjavik, Iceland. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, et al. 2018. Deep contextualized word representations. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sarcasm as contrast between a positive sentiment and negative situation",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Surve",
"suffix": ""
},
{
"first": "Lalindra De",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "704--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 704-714, Seattle, Washing- ton, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Reasoning about entailment with neural attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, and Phil Blunsom. 2015. Reason- ing about entailment with neural attention.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Detecting sarcasm in multimodal social platforms",
"authors": [
{
"first": "Rossano",
"middle": [],
"last": "Schifanella",
"suffix": ""
},
{
"first": "Paloma",
"middle": [],
"last": "Juan",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Liangliang",
"middle": [],
"last": "Cao",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2964284.2964321"
]
},
"num": null,
"urls": [],
"raw_text": "Rossano Schifanella, Paloma Juan, Joel Tetreault, and Liangliang Cao. 2016. Detecting sarcasm in multi- modal social platforms.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "No more pesky learning rate guessing games",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie N. Smith. 2015. No more pesky learning rate guessing games. CoRR, abs/1506.01186.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, et al. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "ERNIE: enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yu-Kun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: en- hanced representation through knowledge integra- tion. CoRR, abs/1904.09223.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Kaiser",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Interna- tional Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Humans require context to infer ironic intent (so computers probably do, too)",
"authors": [
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Do Kook Choe",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Kertz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "512--516",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2084"
]
},
"num": null,
"urls": [],
"raw_text": "Byron C. Wallace, Do Kook Choe, Laura Kertz, and Eugene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 512-516, Baltimore, Mary- land. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On early stopping in gradient descent learning",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Rosasco",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Caponnetto",
"suffix": ""
}
],
"year": 2007,
"venue": "Constructive Approximation",
"volume": "26",
"issue": "",
"pages": "289--315",
"other_ids": {
"DOI": [
"10.1007/s00365-006-0663-2"
]
},
"num": null,
"urls": [],
"raw_text": "Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. 2007. On early stopping in gradient descent learn- ing. Constructive Approximation, 26:289-315.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The proposed methodology to fine-tune a RoBERTa modelFigure 2: The majority voting ensemble methodology consisting of three sample models 4.1 Embeddings from Language Models (ELMo)",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>5 Experiments and Results</td></tr></table>",
"html": null,
"text": "Different Variants of Data"
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td/><td colspan=\"2\">RoBERTa LARGE A</td><td/><td/><td colspan=\"2\">RoBERTa LARGE B</td><td/></tr><tr><td>Data</td><td>ID</td><td>Pr</td><td>Re</td><td>F1</td><td>ID</td><td>Pr</td><td>Re</td><td>F1</td></tr><tr><td>RESP</td><td>1</td><td>0.742</td><td>0.745</td><td>0.742</td><td>6</td><td>0.744</td><td>0.744</td><td>0.744</td></tr><tr><td>CON1</td><td>2</td><td>0.751</td><td>0.756</td><td>0.750</td><td>7</td><td>0.752</td><td>0.753</td><td>0.751</td></tr><tr><td>CON2</td><td>3</td><td>0.751</td><td>0.751</td><td>0.750</td><td>8</td><td>0.763</td><td>0.764</td><td>0.763</td></tr><tr><td>CON3</td><td>4</td><td>0.773</td><td>0.778</td><td>0.772</td><td>9</td><td>0.766</td><td>0.766</td><td>0.766</td></tr><tr><td>CON</td><td>5</td><td>0.759</td><td>0.760</td><td>0.759</td><td>10</td><td>0.757</td><td>0.757</td><td>0.757</td></tr></table>",
"html": null,
"text": "We compare the fine-tuning of different individual models ELMo and USE and BERT LARGE on different variants of Twitter test data. The metric Precision (Pr), Recall (Re) and F1 Score (F1) denotes the test set results."
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>Description</td><td>Models IDs</td><td>Pr</td><td>Re</td><td>F1</td></tr><tr><td>Top 3 RoBERTa A</td><td>3, 4, 5</td><td>0.773</td><td>0.775</td><td>0.772</td></tr><tr><td>Top 3 RoBERTa B</td><td>8, 9, 10</td><td>0.778</td><td>0.779</td><td>0.778</td></tr><tr><td>Top 3 RoBERTa A &amp; B</td><td>4, 8, 9</td><td>0.790</td><td>0.792</td><td>0.790</td></tr><tr><td>Top 5 RoBERTa A &amp; B</td><td>4, 5, 8, 9, 10</td><td>0.788</td><td>0.789</td><td>0.787</td></tr></table>",
"html": null,
"text": "We compare the fine-tuning of RoBERTa LARGE model on different variants of Twitter test data. We fine-tune this model twice on the same train and validation data with different weight initialization. We represent each of these results with a unique ID to utilize them in the ensemble network."
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "We compare the ensembling results based on several combinations of RoBERTa LARGE models. Bold font denotes the best results."
}
}
}
}