{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:13:22.563087Z"
},
"title": "XSYSIGMA at SemEval-2020 Task 7: Method for Predicting Headlines' Humor Based on Auxiliary Sentences with EI-Bert",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shu-Yi",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mei-Zhi",
"middle": [],
"last": "Jin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lian-Xin",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Mo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jian-Ping",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {},
"email": "shenjianping324@pingan.com.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the xsysigma team's system for SemEval 2020 Task 7: Assessing the Funniness of Edited News Headlines. The target of this task is to assess the change in funniness of news headlines after minor editing. It is divided into two subtasks: Subtask 1 is a regression task to detect the humor intensity of a sentence after editing, and Subtask 2 is a classification task to predict which of the two edited versions of an original headline is funnier. In this paper, we report only our implementation of Subtask 2. We first construct sentence pairs with different features as input to Enhancement Inference Bert (EI-Bert). We then apply a data augmentation strategy and the Pseudo-Label method. After that, we apply feature enhancement interaction on the encoding of each sentence for classification with EI-Bert. Finally, we apply a weighted fusion algorithm to the logits obtained from different pre-trained models. We achieve 64.5% accuracy in Subtask 2 and rank first and fifth on the dev and test datasets 1 , respectively.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the xsysigma team's system for SemEval 2020 Task 7: Assessing the Funniness of Edited News Headlines. The target of this task is to assess the change in funniness of news headlines after minor editing. It is divided into two subtasks: Subtask 1 is a regression task to detect the humor intensity of a sentence after editing, and Subtask 2 is a classification task to predict which of the two edited versions of an original headline is funnier. In this paper, we report only our implementation of Subtask 2. We first construct sentence pairs with different features as input to Enhancement Inference Bert (EI-Bert). We then apply a data augmentation strategy and the Pseudo-Label method. After that, we apply feature enhancement interaction on the encoding of each sentence for classification with EI-Bert. Finally, we apply a weighted fusion algorithm to the logits obtained from different pre-trained models. We achieve 64.5% accuracy in Subtask 2 and rank first and fifth on the dev and test datasets 1 , respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Humor detection is a significant task in natural language processing. Most of the available humor datasets treat it as a binary classification task (Khodak et al., 2017; Davidov et al., 2010; Barbieri et al., 2014) , i.e., examining whether a particular text is funny. It is interesting to study whether a brief edit can turn a text from non-funny to funny. In this paper, we examine whether a headline is funny after a brief edit, and which word substitution makes it funnier. Such research helps us focus on the humorous effect of word changes. Our goal is to determine how the machine understands the humor generated by such brief editing and to improve the accuracy of model prediction through different strategies, including rewriting the model's input sentences, enhancing the local inference of features, and applying data augmentation and model ensembling.",
"cite_spans": [
{
"start": 148,
"end": 169,
"text": "(Khodak et al., 2017;",
"ref_id": null
},
{
"start": 170,
"end": 191,
"text": "Davidov et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 192,
"end": 214,
"text": "Barbieri et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The competition dataset comes from Humicroedit (Hossain et al., 2019) , a novel dataset for research in computational humor. Each edited headline is labeled with a score of 0-3 by 5 judges, and the average is taken as its final score. The competition consists of two subtasks: Subtask 1 is to predict the average funniness of an edited headline given the original and edited headlines, and Subtask 2 is to predict which of two edited headlines is funnier given the original headline and the two edited ones. This competition is significant because it is helpful for the task of generating humorous texts (Hossain et al., 2017) , which can be applied to chatbots and news headline generation.",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "(Hossain et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 613,
"end": 635,
"text": "(Hossain et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we only report our implementation of Subtask 2. We conduct several key data pre-processing steps, including data cleaning, constructing different sentence pair inputs, performing semi-supervised clustering of the replacement words, and adding the clustering labels as features of the model inputs. After that, we apply Bert to obtain the word embeddings of the two input sentences and calculate the difference of the feature information with soft alignment. Next, we enhance the local inference information and concatenate it into a sequence for the Softmax layer for classification. Finally, we apply data augmentation and the Pseudo-Label iterative learning strategy to enrich the data information from the embeddings of three pre-trained models: Bert, XLNET, and RoBerta. Cross-validation and a weighted voting ensemble are conducted to improve model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) We creatively construct sentence pair inputs with different sentence representations and generate additional feature representations from the SLPA algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) We propose a new model framework, EI-Bert, which improves the feature information interaction over Bert's output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, humor detection has become a hot research topic. Khodak et al. (2018) introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for training and evaluating sarcasm detection. Davidov et al.(2010) apply a semi-supervised learning technique to identify sarcasm on two very different datasets and discuss the differences between the datasets and the algorithms. Barbieri et al.(2014) investigate automatic methods for detecting irony and humor in social networks; they cast the problem as classification and propose a rich set of features and text representations to train the classifier. Kiddon et al.(2011) detect double entendres, addressing the problem with a classification approach whose features model the characteristics of double entendres.",
"cite_spans": [
{
"start": 199,
"end": 219,
"text": "Davidov et al.(2010)",
"ref_id": "BIBREF3"
},
{
"start": 382,
"end": 403,
"text": "Barbieri et al.(2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As a powerful semantic information feature extractor proposed by Devlin et al.(2018) , Bert has two significant points that we can utilize: (1) feature extraction with the Transformer encoder, pre-trained with the MLM and NSP strategies; (2) a two-stage paradigm of large-scale pre-training followed by fine-tuning for specific tasks. It demonstrates the effectiveness of bidirectional pre-training for language representation. Unlike the previously used unidirectional language models, Bert applies a masked language model to achieve pre-trained deep bidirectional representations. It is the first representation model based on fine-tuning that achieves state-of-the-art performance on a large number of sentence-level and token-level tasks, outperforming many task-specific architectures.",
"cite_spans": [
{
"start": 65,
"end": 84,
"text": "Devlin et al.(2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Nowadays, researchers have proposed constructing auxiliary sentences in NLP tasks to enrich the input of Bert; Sun et al. (2019), for example, convert a single-sentence task into a sentence pair task by constructing auxiliary sentences. 3 System overview 3.1 Auxiliary sentences and data augmentation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the system, we first conduct data cleaning, including word stem extraction, lexical reduction, spelling error correction, and punctuation processing. We construct auxiliary sentences via data augmentation, through replacement word rewriting and Pseudo-Label data generation with semi-supervised learning. Then, we form the auxiliary sentences into sentence pairs as the input of Bert. As illustrated in Fig. 1 , we construct three types of sentence pair inputs, where TextA is the first input sentence and TextB is the second input sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 410,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As depicted in Fig. 1 , the original input sentence is \"I am done fed up with California , some conservatives look to texas\", shown in the lower left corner. The edited input sentence is \"I am done fed up with California , some vagrants look to texas\". The word \"conservatives\" is changed to \"vagrants\", which lies in TextA. Similarly, the word \"California\" is changed to \"pancakes\", which lies in TextB. Next, considering the high repetition of information between the original sentence and the edited sentence (Figure 1 shows the three different sentence pair inputs), we construct new phrases, \"We replaced conservatives with vagrants\" and \"We replaced California with pancakes\", to make better use of the relationship between the sentences in Input2.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 21,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 514,
"end": 522,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, we group the replacement words into 50 clusters via SLPA (Speaker-listener Label Propagation Algorithm), an extension of the LPA algorithm for community discovery (Yuchen et al., 2019) . In Input3, we additionally add the clustering information of the replacement words to the model input.",
"cite_spans": [
{
"start": 176,
"end": 197,
"text": "(Yuchen et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Furthermore, we perform data augmentation. As shown in Fig. 2 , TextA and TextB are the input sentences with minor edits, and the scores come from the expert-annotated dataset. We select sentence pairs whose funniness scores are both greater than a certain threshold, which we set to 0.8 to fit the dataset. Since the scores of TextA and TextB are both greater than 0.8, two other sentence pairs can be constructed. In the first, TextA is unchanged and the corresponding sentence adopts the original headline, which corresponds to Augmentation1 in the figure; since the score of TextA is greater than that of TextB, the label value is 1. Similarly, in the second, TextB is unchanged and the corresponding sentence adopts the original headline, which corresponds to Augmentation2 in the figure. A slightly larger threshold can be adopted, which enlarges the semantic gap between the constructed sentence pairs. As shown in Fig. 3 , we improve our model by using an existing model to label unlabeled data, selecting the class with the largest predicted probability as the Pseudo-Label. Then, we add the weakly labeled data to the training process and add the weak loss to the original CE loss. The weak loss is defined as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 61,
"text": "Fig. 2",
"ref_id": "FIGREF1"
},
{
"start": 900,
"end": 906,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
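The augmentation rule described above can be sketched as follows; the function name, argument order, and label convention (1 meaning the first sentence of the pair is funnier) are illustrative assumptions, not the authors' code.

```python
def augment(original, text_a, text_b, score_a, score_b, threshold=0.8):
    """Sketch of the paper's augmentation rule (assumed interface).

    If both edited headlines score above the threshold, pair each edited
    headline with the original headline. The edited headline is assumed
    funnier than the original, so the label is 1 (first sentence funnier).
    """
    pairs = []
    if score_a > threshold and score_b > threshold:
        # Augmentation1: TextA vs. the original headline.
        pairs.append((text_a, original, 1))
        # Augmentation2: TextB vs. the original headline.
        pairs.append((text_b, original, 1))
    return pairs
```

Raising the threshold, as the paper notes, keeps only pairs with a larger semantic (funniness) gap.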
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L_w = \\frac{1}{n} \\sum_{j=1}^{n} \\sum_{i=1}^{m} L(y_i^j, f_i^j),",
"eq_num": "(1)"
}
],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where n is the number of samples in a mini-batch of unlabeled data for SGD, m is the number of classes, y_i^j is the pseudo-label of the j-th unlabeled sample for class i, and f_i^j is the i-th output unit for the j-th unlabeled sample. We iterate not only on the model but also on the predicted Pseudo-Labels, which is essentially semi-supervised learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
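The weak loss of Eq. (1) can be sketched with NumPy; the array shapes and the assumption that the model outputs class probabilities (so cross-entropy applies directly) are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def weak_loss(probs_unlabeled):
    """Pseudo-Label weak loss, Eq. (1) (sketch, assumed interface).

    probs_unlabeled: (n, m) array of predicted class probabilities for a
    mini-batch of n unlabeled samples over m classes.
    """
    n, m = probs_unlabeled.shape
    # Pseudo-label = class with the largest predicted probability.
    pseudo = probs_unlabeled.argmax(axis=1)
    one_hot = np.eye(m)[pseudo]  # y_i^j in Eq. (1)
    # Cross-entropy of each sample against its own pseudo-label.
    ce = -(one_hot * np.log(probs_unlabeled + 1e-12)).sum(axis=1)
    # (1/n) * sum over samples j of sum over classes i of L(y_i^j, f_i^j).
    return ce.mean()
```

In training, this term is added to the ordinary CE loss on labeled data, and both the model and the Pseudo-Labels are updated iteratively.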
{
"text": "Natural Language Inference mainly determines the relationship between two sentences. In order to compare the funniness of the two edited sentences, we naturally transform the sentence pair classification task into a textual inference task that infers the relationship between the two input sentences. The model applies the pre-trained Bert model for encoding and calculates the difference between the sentence encodings. We then concatenate this encoding with the original vectors to enhance the difference. Finally, we send the encoding vectors to the Max-Pooling and Softmax layers for classification. We name this procedure Enhancement Inference Bert (EI-Bert), as shown in Fig. 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 699,
"end": 705,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "We feed the sentence pairs constructed and augmented as in Fig. 1 and Fig. 2 into the Bert model and obtain the encoding of each word in a sentence from Bert's sequence output. We obtain the vectors \u0101_i and b\u0304_j from Bert, where \u0101_i is the representation vector of TextA's i-th token from Bert's last hidden state, and b\u0304_j is the representation vector of TextB's j-th token from Bert's last hidden state. We then calculate the similarity between the words of the two sentences. The attention weight is computed by",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 67,
"text": "Fig. 1 and Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Att_{ij} = \\bar{a}_i^T \\bar{b}_j.",
"eq_num": "(2)"
}
],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "After that, we compute the weighted summations \u00e3_i and b\u0303_i by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\tilde{a}_i = \\sum_{j=1}^{l_b} \\frac{\\exp(Att_{ij})}{\\sum_{k=1}^{l_b} \\exp(Att_{ik})} \\bar{b}_j, \\quad i \\in [1, ..., l_a],",
"eq_num": "(3)"
}
],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\tilde{b}_i = \\sum_{j=1}^{l_a} \\frac{\\exp(Att_{ji})}{\\sum_{k=1}^{l_a} \\exp(Att_{ki})} \\bar{a}_j, \\quad i \\in [1, ..., l_b].",
"eq_num": "(4)"
}
],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "Figure 4: The model of EI-Bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "The relevant content in {b\u0304_j}_{j=1}^{l_b} is selected and represented as \u00e3_i. Similarly, the relevant content in {\u0101_j}_{j=1}^{l_a} is selected and represented as b\u0303_i. We call this process Local Inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "We obtain \u00e3_i and b\u0303_i from local inference. After that, we stack the token vectors \u00e3_i to form the sentence vector \u00e3. Similarly, we apply the same stacking operation to generate the sentence vectors \u0101, b\u0304, and b\u0303. We conduct information enhancement by calculating the difference and the element-wise product of \u00e3 and \u0101, respectively, and then concatenate all the information together. The formula is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m_a = [\\bar{a}; \\tilde{a}; \\bar{a} - \\tilde{a}; \\bar{a} \\odot \\tilde{a}],",
"eq_num": "(5)"
}
],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "where \u00e3 represents the weighted sum aligned to \u0101, \u0101 \u2212 \u00e3 is the difference of \u0101 and \u00e3, and \u2299 is the element-wise product of \u0101 and \u00e3. m_a and m_b represent the enhanced vectors of \u0101 and b\u0304, respectively. However, different sentences have different numbers of tokens, so the dimensions of the sentence vectors vary. We therefore concatenate the two sentences' representation vectors and apply Max-Pooling before Softmax, which yields the prediction of the label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
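The local inference and enhancement steps of EI-Bert (Eqs. (2)-(5)) can be sketched in NumPy; the function names, the batch-free shapes, and the use of plain max-pooling to get fixed-size vectors are assumptions for illustration, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def enhance(a_bar, b_bar):
    """EI-Bert local inference + enhancement (sketch of Eqs. (2)-(5)).

    a_bar: (l_a, d) token vectors of TextA from Bert's last hidden state.
    b_bar: (l_b, d) token vectors of TextB.
    """
    att = a_bar @ b_bar.T                      # Eq. (2): Att_ij = a_i^T b_j
    a_tilde = softmax(att, axis=1) @ b_bar     # Eq. (3): align TextB content to TextA
    b_tilde = softmax(att.T, axis=1) @ a_bar   # Eq. (4): align TextA content to TextB
    # Eq. (5): concatenate original, aligned, difference, element-wise product.
    m_a = np.concatenate([a_bar, a_tilde, a_bar - a_tilde, a_bar * a_tilde], axis=-1)
    m_b = np.concatenate([b_bar, b_tilde, b_bar - b_tilde, b_bar * b_tilde], axis=-1)
    # Max-pool over tokens so both sentences yield fixed-size vectors
    # regardless of their token counts, before the Softmax classifier.
    return m_a.max(axis=0), m_b.max(axis=0)
```

The pooled vectors for the two sentences are then concatenated and fed to the classification layer.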
{
"text": "The official dataset consists of 13,694 labeled headlines across training, development, and test. The additional training dataset and the development dataset were released afterwards, while the test dataset consists of 2,960 sentences. In order to make the development set nearly the same size as the test set, we conduct 5-fold cross-validation. For each fold, we perform data augmentation on the training dataset while the development dataset remains unchanged. Similarly, we conduct data augmentation with 5-fold cross-validation again after adding Pseudo-Labels, plus three full-volume datasets (without a held-out development dataset) as training data, yielding 13 datasets in the end. The data pre-processing stage includes abbreviation reduction, spelling correction, word stem extraction, case conversion, special symbol processing, and other operations; at the same time, training samples that become identical after case conversion but carry inconsistent labels are screened out. Table 1 reports the hyperparameters of each model. An advantage of the RoBerta pre-trained model is that its pre-training data include about 38 GB of text extracted from Reddit, and the competition's training data collected original news headlines posted on Reddit by news media. During fine-tuning, as the number of iterations increases, the loss is difficult to decrease at the default learning rate and the model is difficult to converge; this can be improved by reducing the learning rate and increasing the number of iterations. Table 2 reports the accuracy of RoBerta with different inputs. We can see that: (1) Input3 contains the clustering information of replacement words, and its performance is better than Input2 and Input1; (2) the improved EI-RoBerta performs better than RoBerta due to further interactive learning of feature information. In this task, we also compare single-sentence classification and sentence pair classification, adopting Bert as the pre-trained model; sentence pair classification outperforms single-sentence classification. This makes sense for two reasons: (1) Bert has absorbed knowledge of inter-sentence relationships through its next sentence prediction mechanism, which helps downstream tasks that require such relationships; (2) the Transformer's self-attention mechanism over the sentence pair is beneficial for fine-tuning.",
"cite_spans": [],
"ref_spans": [
{
"start": 1028,
"end": 1035,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1596,
"end": 1604,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "In Subtask 2, we achieve final results of 0.64460 accuracy and 0.25411 reward score, ranking fifth in the test phase. In Table 3 , we show the top 3 results from the three pre-trained models. 5-cv-data indicates the prediction results of the model after 5-fold cross-validation without data augmentation; 5-cv-data-augmentation indicates that the development dataset remains unchanged while the training dataset is augmented; and the last column indicates the results after adding Pseudo-Label data for 5 epochs on top of data augmentation. From the table we can see that, under 5-fold cross-validation with different strategies, data augmentation and Pseudo-Labels are conducive to model performance improvement, with RoBerta attaining the best performance among the pre-trained models. In the end, we select the 21 models whose development set score was greater than 0.64 and apply a weighted summation to the logits they predict on the test set; the weight coefficient ratio of Bert, XLNET, and RoBerta is 3:4:3. In addition, we screen out 13 groups of contradictory predictions (funniness A>B, A<C, B>C or A<B, A>C, B<C). Because the final evaluation ignores predictions in which the two edited headlines have the same funniness, we also filter out 18 samples predicted as label 0, using the plurality of individual model results for label correction.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 147,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
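The weighted fusion of per-model logits with the 3:4:3 ratio described above can be sketched as follows; the function name and the assumption that each model contributes one logits array of shape (num_samples, num_classes) are illustrative.

```python
import numpy as np

def weighted_fusion(logits_bert, logits_xlnet, logits_roberta, weights=(3, 4, 3)):
    """Weighted fusion of per-model logits (sketch; 3:4:3 ratio as in the paper).

    Each logits array has shape (num_samples, num_classes).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize the 3:4:3 ratio to fusion weights
    fused = w[0] * logits_bert + w[1] * logits_xlnet + w[2] * logits_roberta
    return fused.argmax(axis=1)  # final predicted label per sample
```

Contradiction filtering (e.g., A>B, A<C, B>C) and label-0 correction would then be applied on top of these fused predictions.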
{
"text": "This paper presents a method to detect humor in edited news headlines with the Enhancement Inference Bert model based on auxiliary sentences. We first creatively construct a variety of different auxiliary sentence pair inputs and then use the pre-trained Bert model to encode the sentence pairs. Secondly, we enhance the local inference information of the sentence pair features, while applying data augmentation and Pseudo-Labels. Finally, we carry out a multi-model weighted ensemble strategy to enhance model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In the future, we will exploit multi-task learning to simultaneously learn Subtask 1, a regression task, and Subtask 2, a classification task. By sharing the information among the tasks, we can gain more knowledge and apply a more precise data representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "https://competitions.codalab.org/competitions/20970#results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic detection of irony and humour in twitter",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th International Conference on Computational Creativity",
"volume": "",
"issue": "",
"pages": "2637--2642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Horacio Saggion. 2014. Automatic detection of irony and humour in twitter, Pages 2637-2642. Proceedings of the 5th International Conference on Computational Creativity.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enhanced LSTM for Natural Language Inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [
"Wei Hui"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei. Hui Jiang, Diana Inkpen. 2017. Enhanced LSTM for Natural Language Inference. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions. Association for Computational Linguistics",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova. 2019. BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semi-supervised recognition of sarcastic sentences in twitter and amazon",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov, Oren Tsur, Ari Rappoport. 2010. Semi-supervised recognition of sarcastic sentences in twitter and amazon, Pages 107-116. Proceedings of the Fourteenth Conference on Computational Natural Language Learning.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Filling the Blanks (hint: plural noun) for Mad Libs Humor",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "638--647",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Lucy Vanderwende, Eric Horvitz, Henry Kautz. 2017. Filling the Blanks (hint: plural noun) for Mad Libs Humor, Pages 638-647. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SemEval-2020 Task 7: Assessing Humor in Edited News Headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Michael Gamon and Henry Kautz. 2020. SemEval-2020 Task 7: Assessing Humor in Edited News Headlines. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "President Vows to Cut <Taxes> Hair\": Dataset and Analysis of Creative Text Editing for Humorous Headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "1",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Michael Gamon. 2019. \"President Vows to Cut <Taxes> Hair\": Dataset and Analy- sis of Creative Text Editing for Humorous Headlines, Volume 1, Pages 133-142. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A large self-annotated corpus for sarcasm. Language Resources and Evaluation Conference",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Khodak, Nikunj Saunshi and Kiran Vodrahalli. 2018. A large self-annotated corpus for sarcasm. Lan- guage Resources and Evaluation Conference.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks",
"authors": [
{
"first": "Dong-Hyun",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "ICML 2013 Workshop : Challenges in Representation Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong-Hyun Lee. 2013. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks. ICML 2013 Workshop : Challenges in Representation Learning.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettle- moyer, Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised joke generation from big data, Pages 228-232. Association for Computational Linguistics",
"authors": [
{
"first": "David",
"middle": [],
"last": "Matthews",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Matthews. 2013. Unsupervised joke generation from big data, Pages 228-232. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parallelizing and optimizing overlapping community detection with speaker-listener Label Propagation Algorithm on multi-core architecture, Pages 439-443",
"authors": [
{
"first": "Yuchen",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Haixia",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dongsheng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE 2nd International Conference on Cloud Computing and Big Data Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuchen Qiao, Haixia Wang, Dongsheng Wang. 2017. Parallelizing and optimizing overlapping community detec- tion with speaker-listener Label Propagation Algorithm on multi-core architecture, Pages 439-443. 2017 IEEE 2nd International Conference on Cloud Computing and Big Data Analysis.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Understanding the Behaviors of BERT in Ranking",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Qiao, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu. 2019. Understanding the Behaviors of BERT in Ranking. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unifying Question Answering, Text Classification, and Regression via Span Extraction",
"authors": [
{
"first": "Nitish Shirish",
"middle": [],
"last": "Keskar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher. 2019. Unifying Question Answering, Text Classification, and Regression via Span Extraction. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Utilizing Bert for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Luyao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "1",
"issue": "",
"pages": "380--385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Luyao Huang, Xipeng Qiu. 2019. Utilizing Bert for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence, Volume 1, Pages 380-385. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": ["V"],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 2019. XLNet: Gen- eralized Autoregressive Pretraining for Language Understanding. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "apply Bert to optimize fine-grained emotion classification and compare the experimental results of single sentence classification and sentence pair classification based on Bert fine-tuning, analyze the advantages of sentence pair classification, and verify the validity of conversion method. Keskar et al.(2019) propose to unify the QA problem and the text classification problem into a reading comprehension problem with the help of auxiliary data to enhance the model performance, and demonstrate that Bert is more suitable to handle Natural Language Inference (NLI) dataset. Clark et al.(2019) also show that the Bert model can learn more semantic information from inferential data. Qiao et al.(2019) concludes that if Bert is only adopted as a feature expression tool, the input side of Bert is just to enter Question or Passage separately, and take out the [CLS] mark of Bert high-level as the semantic representation of Question or Passage, this method is far less effective than entering Question and Passage on the Bert side at the same time which let Transformer do the matching process of Question and Passage.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Data augmentation with certain threshold",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Pseudo-Label with labeled and unlabeled data simultaneously",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "b = [b;b;b \u2212b;b b ].",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td colspan=\"4\">Pre-trained Model Bert RoBerta XLNET</td></tr><tr><td>learning rate</td><td>2e-5</td><td>5e-6</td><td>5e-6</td></tr><tr><td>num train epochs</td><td>5</td><td>30</td><td>30</td></tr><tr><td>batch size</td><td>32</td><td>32</td><td>32</td></tr><tr><td>max seq length</td><td>80</td><td>80</td><td>80</td></tr><tr><td colspan=\"2\">warmup proportion 0.05</td><td>0.1</td><td>0.1</td></tr><tr><td>max-pooling</td><td>64</td><td>128</td><td>128</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Parameter settings for different pre-trained models"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>Strategy</td><td>ACC</td></tr><tr><td>RoBerta with Input1</td><td>0.5626</td></tr><tr><td>RoBerta with Input2</td><td>0.5817</td></tr><tr><td>RoBerta with Input3</td><td>0.5976</td></tr><tr><td colspan=\"2\">EI-RoBerta with Input3 0.6186</td></tr></table>",
"html": null,
"type_str": "table",
"text": "The results of the accuracy of different inputs of models"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td/><td>: Comparison of different strategies</td><td/></tr><tr><td>Models</td><td colspan=\"3\">5-cv-data 5-cv-data-augmentation 5-cv-data-augmentation+Pseudo-Label</td></tr><tr><td>Bert(ACC)</td><td>0.6386</td><td>0.6398</td><td>0.6409</td></tr><tr><td>EI-Bert(ACC)</td><td>0.6424</td><td>0.6444</td><td>0.6434</td></tr><tr><td>EI-XLNET(ACC)</td><td>0.6451</td><td>0.6487</td><td>0.6490</td></tr><tr><td>EI-RoBerta(ACC)</td><td>0.6438</td><td>0.6470</td><td>0.6461</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
}
}
}
}