{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:14:31.360889Z"
},
"title": "Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock Humorous Headlines",
"authors": [
{
"first": "Shuning",
"middle": [],
"last": "Jin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {}
},
"email": "shuning.jin@rutgers.edu"
},
{
"first": "Yue",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Minnesota Duluth",
"location": {}
},
"email": ""
},
{
"first": "Xiane",
"middle": [],
"last": "Tang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Minnesota Duluth",
"location": {}
},
"email": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Minnesota Duluth",
"location": {}
},
"email": "tpederse@d.umn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We use pretrained transformer-based language models in SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. Inspired by the incongruity theory of humor, we use a contrastive approach to capture the surprise in the edited headlines. In the official evaluation, our system gets 0.531 RMSE in Subtask 1, 11 th among 49 submissions. In Subtask 2, our system gets 0.632 accuracy, 9 th among 32 submissions.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We use pretrained transformer-based language models in SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. Inspired by the incongruity theory of humor, we use a contrastive approach to capture the surprise in the edited headlines. In the official evaluation, our system gets 0.531 RMSE in Subtask 1, 11 th among 49 submissions. In Subtask 2, our system gets 0.632 accuracy, 9 th among 32 submissions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Humor detection is a challenging problem in natural language processing. SemEval-2020 Task 7 (Hossain et al., 2020a) 1 focuses on detecting humor in English news headlines with micro-edits. Specifically, the edited headlines have one selected word or entity that is replaced by editors, which are then graded by the degree of funniness. Accurate scoring of the funniness from micro-edits can serve as a footstone of humorous text generation (Hossain et al., 2020a) .",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Hossain et al., 2020a)",
"ref_id": "BIBREF6"
},
{
"start": 441,
"end": 464,
"text": "(Hossain et al., 2020a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by the incongruity theory (Veale, 2004; Morreall, 2016) , we believe that contrast and surprise is a key ingredient of humor. We instantiate this intuition with a contrastive framework. We then systematically compare three widely used models: CBOW, BERT (Devlin et al., 2019) , and RoBERTa , providing a benchmark for this task. Our best system, based on RoBERTa, achieves compelling performance for both subtasks. Our code is available on GitHub. 2",
"cite_spans": [
{
"start": 35,
"end": 48,
"text": "(Veale, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 49,
"end": 64,
"text": "Morreall, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 263,
"end": 284,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early humor recognition systems are mostly based on traditional machine learning methods, such as support vector machine, decision tree, Naive Bayes, and k-nearest neighbors (Castro et al., 2016) . Besides, an n-gram language model shows good performance (Yan and Pedersen, 2017) in learning a sense of humor from tweets. Yet n-gram models are limited to a small number of context words.",
"cite_spans": [
{
"start": 174,
"end": 195,
"text": "(Castro et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 255,
"end": 279,
"text": "(Yan and Pedersen, 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pretrained language models based on Transformer (Vaswani et al., 2017) can obtain contextual information of a whole sentence. Among this family, BERT has been used to assess the humor in tweets and jokes (Mao and Liu, 2019; Weller and Seppi, 2019) . Enlightened by these recent advances, we use BERT to judge the funniness of edited news headlines. We additionally experiment with RoBERTa, a robustly optimized variant of BERT.",
"cite_spans": [
{
"start": 48,
"end": 70,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 204,
"end": 223,
"text": "(Mao and Liu, 2019;",
"ref_id": "BIBREF11"
},
{
"start": 224,
"end": 247,
"text": "Weller and Seppi, 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Lastly, several works also attempt to explicitly model incongruity and surprise of humourous text, focusing on homophonic puns. Kao et al. (2016) formalizes incongruity as a mixed effect of ambiguity and distinctiveness, quantified by entropy and Kullback-Leibler divergence. He et al. (2019) proposes a local-global surprisal measure based on the log-likelihood ratio, to assess whether a sentence is a pun. However, we focus on a broader definition of humor, and formulate incongruity as an input pair to a dual encoder framework. Table 1 : Examples in train data of both subtasks. Underlined words in original headlines are to be substituted by the edit words.",
"cite_spans": [
{
"start": 128,
"end": 145,
"text": "Kao et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 276,
"end": 292,
"text": "He et al. (2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 533,
"end": 540,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The Humicroedit dataset (Hossain et al., 2019) provides the training, development, and test data for this task. We also use additional training data from the FunLines dataset (Hossain et al., 2020b) . The dataset statistics are summarized in Appendix A. In Subtask 1, the goal is predicting the funniness score of an edited headline. The score z ranges from 0 to 3, where 0 means not funny and 3 means very funny. In Subtask 2, the goal is to predict the funnier between an edited sentence pair. For labels y \u2208 {0, 1, 2}, 0 implies two headlines are equally funny, 1 implies the first is the funnier, and 2 implies the second is the funnier. Examples of the two subtasks are in Table 1 .",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "(Hossain et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 175,
"end": 198,
"text": "(Hossain et al., 2020b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 678,
"end": 685,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Data",
"sec_num": "3"
},
{
"text": "What are the important characteristics of humor? The incongruity theory, a dominant theory of humor, states that \"it is the perception of something incongruous-something that violates our mental patterns and expectations\" (Morreall, 2016) . Therefore, we hypothesize an edited headline is funny if the edit words are semantically distant from the context words or the original words. This can be exemplified by the first two examples in Table 1 . We start by looking at Headline 33210 (the MONKEY EXAMPLE), also shown below. The context sentence is extracted from the edit sentence by replacing the edit words with a single [MASK] token, motivated by masked language models. Humans are likely to predict a place or a character for the masked token, while the edit token is \"monkeys\".",
"cite_spans": [
{
"start": 222,
"end": 238,
"text": "(Morreall, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 437,
"end": 444,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Original CALIFORNIA 1 AND 2 PRESIDENT i=3 TRUMP j=4 ARE 5 GOING 6 TO 7 WAR 8 T o = 8 Edit CALIFORNIA 1 AND 2 MONKEYS i=3, k=3 ARE 4 GOING 5 TO 6 WAR 7 T e = 7 Context CALIFORNIA 1 AND 2 [MASK] i=3 ARE 4 GOING 5 TO 6 WAR 7 T c = 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Similarly for Headline 1664, given the context WHAT IF [MASK] HAD AS MUCH INFLUENCE AS ECONOMISTS, humans might fill in occupation-related words like \"scientist\" or \"sociologist\" (as in the original headline). However, the edit word is \"donkeys\", which is a surprising prediction and is considered very funny (scored 2.8 out of 3). In this section, we describe a concrete architecture that models the strength of contrast and surprise, which translates into the funniness score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Let x = (x 1 . . . x To ),x = (x 1 . . .x Te ), x = (x 1 . . . x Tc )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Representation",
"sec_num": "4.1"
},
{
"text": "denote the original, edit, context token sequences. A pretrained word embedding or pretrained encoder maps the tokens into vector sequences e 1:To ,\u1ebd 1:Te , e 1:Tc . The goal is to encode edit sentence, original sentence, context sentence into fixed-length vector representations u, v , v \u2208 R d . Importantly, we use span (a.k.a. sub-sentence) representation rather than whole sentences, which corresponds to the underlined ranges in the above MONKEY EXAMPLE. CBOW We first explore context-independent word representations. We use pretrained GloVe (Pennington et al., 2014) vectors with d = 300 and a vocabulary of 2.2 million words. We use word averaging to We max pool all context words to extract the most salient features. The context vector is v = MaxPool(e 1 . . . e i\u22121 , e i+1 . . . e Tc ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Representation",
"sec_num": "4.1"
},
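A short sketch of the CBOW span pooling described above (mean pooling over the edit and original spans, max pooling over the remaining context words). This is an illustrative PyTorch reconstruction, not the authors' released code; the function name, the random stand-ins for GloVe vectors, and the toy span indices are assumptions.

```python
import torch

def cbow_span_vectors(edit_emb, orig_emb, ctx_emb, edit_span, orig_span, mask_pos):
    """Illustrative sketch of the CBOW span representations (Section 4.1).

    edit_emb, orig_emb, ctx_emb: (T, d) tensors of pretrained word vectors (e.g. GloVe)
    edit_span = (i, k), orig_span = (i, j): inclusive 0-based spans of edit / original words
    mask_pos = i: position of the [MASK] token in the context sentence
    """
    i, k = edit_span
    u = edit_emb[i:k + 1].mean(dim=0)          # edit vector: mean over the edit span
    i, j = orig_span
    v = orig_emb[i:j + 1].mean(dim=0)          # original vector: mean over the original span
    # context vector: max pool over all context words except the [MASK] slot
    ctx = torch.cat([ctx_emb[:mask_pos], ctx_emb[mask_pos + 1:]], dim=0)
    v_bar = ctx.max(dim=0).values
    return u, v, v_bar

# toy usage on the MONKEY EXAMPLE shapes, with random tensors standing in for GloVe (d = 300)
d = 300
u, v, v_bar = cbow_span_vectors(torch.randn(7, d), torch.randn(8, d), torch.randn(7, d),
                                edit_span=(2, 2), orig_span=(2, 3), mask_pos=2)
print(u.shape, v.shape, v_bar.shape)
```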
{
"text": "Transformer We use pretrained transformer-based language models to obtain contextual word representations. The architecture is shown in Figure 1 , a Siamese network (Bromley et al., 1994) where the two encoders have identical structures and shared parameters. 3 With the self-attention mechanism, each word attends to all other words in the sentence and aggregates contextual information. The edit and original vectors are obtained by averaging:",
"cite_spans": [
{
"start": 165,
"end": 187,
"text": "(Bromley et al., 1994)",
"ref_id": "BIBREF0"
},
{
"start": 260,
"end": 261,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Span Representation",
"sec_num": "4.1"
},
{
"text": "u = MeanPool(\u1ebd i . . .\u1ebd k ), v = MeanPool(e i . . . e j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Representation",
"sec_num": "4.1"
},
{
"text": "The context vector is simply the contextual embedding of the masked token: v = e i . We experiment with BERT and RoBERTa, using the PyTorch (Paszke et al., 2019) implementation from HuggingFace Transformers library (Wolf et al., 2019) . 4 We use bert-base-uncased (L = 12, d = 768, lower-cased) and roberta-base (L = 12, d = 768).",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 215,
"end": 234,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 237,
"end": 238,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Span Representation",
"sec_num": "4.1"
},
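A rough sketch of how a contextual span vector could be obtained with the HuggingFace Transformers library named above. The helper function, the character-offset alignment to subword tokens, and pooling only the last hidden layer are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")   # L = 12, d = 768

def span_vector(sentence, span_start, span_end):
    """Mean-pool the last-layer hidden states of the subword tokens whose character
    offsets overlap [span_start, span_end) -- a simplified MeanPool span representation."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]             # (num_subwords, 2) character offsets
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]   # (num_subwords, 768)
    keep = [t for t, (s, e) in enumerate(offsets.tolist())
            if s < span_end and e > span_start and e > s]
    return hidden[keep].mean(dim=0)

edit = "What if donkeys had as much influence as economists"
u = span_vector(edit, edit.index("donkeys"), edit.index("donkeys") + len("donkeys"))
print(u.shape)  # torch.Size([768])
```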
{
"text": "Transfer Paradigm When using those pretrained word representations, we consider two transfer paradigms: finetuning (FINETUNE) and not finetuning (FREEZE). In the case of FREEZE, we use fixed word embedding directly as the feature for CBOW. For transformers, we use a weighted average of hidden layers from the frozen encoder, with trainable mixing scalars. This approach follows ELMo (Peters et al., 2018) and the edge probing model (Tenney et al., 2019) . Specifically, the final aggregated embedding for i th position is e i = \u03b3 L l=0 \u03b1 l e l,i , where \u03b3 is a scaling factor, \u03b1 l is the weight of the l th layer, L is the total number of layers, and l = 0 corresponds to the embedding layer.",
"cite_spans": [
{
"start": 384,
"end": 405,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 433,
"end": 454,
"text": "(Tenney et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Span Representation",
"sec_num": "4.1"
},
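A minimal sketch of the ELMo-style scalar mix used in the FREEZE setting: trainable per-layer weights (softmax-normalized) and a scale factor combine the frozen encoder's hidden layers. The module name and the softmax normalization are assumptions consistent with Peters et al. (2018); the paper does not spell out these details.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """e_i = gamma * sum_l alpha_l * e_{l,i} with trainable gamma and alpha (Section 4.1).

    hidden_states: list of (L + 1) tensors of shape (batch, seq_len, d), where index 0
    is the embedding layer, e.g. the output of a frozen transformer called with
    output_hidden_states=True.
    """

    def __init__(self, num_layers):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))   # one logit per layer
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, hidden_states):
        alpha = torch.softmax(self.weights, dim=0)              # normalized layer weights
        stacked = torch.stack(list(hidden_states), dim=0)       # (L+1, batch, seq, d)
        mixed = (alpha.view(-1, 1, 1, 1) * stacked).sum(dim=0)
        return self.gamma * mixed

# toy usage: 13 "layers" (embedding layer + 12 transformer layers) of a base-size model
layers = [torch.randn(2, 10, 768) for _ in range(13)]
print(ScalarMix(13)(layers).shape)  # torch.Size([2, 10, 768])
```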
{
"text": "Regression As mentioned at the beginning of the section, contrast and surprise is the key to humor. To represent the pairwise relationship between two vectors, we derive feature from h = f (x, y) = [x; y; |x \u2212 y| ; x * y] \u2208 R 4d , where ; denotes concatenation and * denotes element-wise multiplication. This feature has been used as the input to the classifier in the sentence pair tasks of SentEval (Conneau and Kiela, 2018) . To formulate the contrast pair, we either use edit sentence and its context f (u, v), or edit sentence and original sentence f (u, v ). We denote the two scenarios as CONTEXT and ORIGINAL respectively. Finally, we use a classifier to predict the funniness score of the edited headline.\u1e91 = Classifier(h) \u2208 R. The classifier is a two-layer MLP with 256 hidden dimensions. When finetuning transformers we use single-layer linear projection instead, since its large number of parameters have already given us sufficient flexibility. The optimization objective is mean squared error L = z \u2212\u1e91 2 . Classification In Subtask 2, we use the same method to predict the scores of two edited versions\u1e91 (1) and\u1e91 (2) . By comparing the scores, the funnier version is found during evaluation and testing time: y = arg max i\u2208{1,2}\u1e91 (i) . The loss function is L = z (1) \u2212\u1e91 (1) 2 + z (2) \u2212\u1e91 (2) 2 .",
"cite_spans": [
{
"start": 401,
"end": 426,
"text": "(Conneau and Kiela, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1127,
"end": 1130,
"text": "(2)",
"ref_id": null
},
{
"start": 1244,
"end": 1247,
"text": "(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Specific",
"sec_num": "4.2"
},
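The pair feature, the regression head, and the two losses above can be written compactly as follows. This is a sketch under stated assumptions: the tensor names, batch size, and randomly generated vectors are placeholders; the 256-unit two-layer MLP follows the description in the text (a single linear layer would replace it when finetuning).

```python
import torch
import torch.nn as nn

def contrast_feature(x, y):
    """h = [x; y; |x - y|; x * y], the SentEval-style pair feature (Section 4.2)."""
    return torch.cat([x, y, (x - y).abs(), x * y], dim=-1)

d = 768
regressor = nn.Sequential(                   # two-layer MLP with 256 hidden dimensions
    nn.Linear(4 * d, 256), nn.ReLU(), nn.Linear(256, 1))

# Subtask 1: predict the funniness score of one edited headline (u: edit span vector,
# v: original or context vector, z: gold score in [0, 3])
u, v, z = torch.randn(8, d), torch.randn(8, d), torch.rand(8) * 3
z_hat = regressor(contrast_feature(u, v)).squeeze(-1)
loss1 = ((z - z_hat) ** 2).mean()            # mean squared error

# Subtask 2: score both edited versions, sum the two MSE terms, predict the funnier one
u1, v1, u2, v2 = (torch.randn(8, d) for _ in range(4))
z1_true, z2_true = torch.rand(8) * 3, torch.rand(8) * 3
z1 = regressor(contrast_feature(u1, v1)).squeeze(-1)
z2 = regressor(contrast_feature(u2, v2)).squeeze(-1)
loss2 = ((z1_true - z1) ** 2 + (z2_true - z2) ** 2).mean()
prediction = torch.where(z1 >= z2, torch.ones_like(z1), 2 * torch.ones_like(z1))
print(loss1.item(), loss2.item(), prediction[:4])
```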
{
"text": "For Subtask 1, the primary metric for official ranking is Root Mean Squared Error (RMSE). In addition, we calculate Spearman's rank correlation coefficient which measures the monotonic relationship between predicted scores and true scores. In the evaluation of Subtask 2, instances with label 0 are ignored. The primary metric for official ranking is accuracy. As an auxiliary metric, reward takes pairwise score differences into account:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.1"
},
{
"text": "r = 1 N N i=1 (1\u0177 i =y i \u2212 1\u0177 i =y i )|z (1) i \u2212 z (2) i |,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.1"
},
{
"text": "where y i and\u0177 i are true labels and predicted labels respectively, and z i are true scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "5.1"
},
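The metrics can be computed as sketched below with NumPy and SciPy; the array names are placeholders, and label-0 (equally funny) pairs are dropped before accuracy and reward as described above.

```python
import numpy as np
from scipy.stats import spearmanr

def subtask1_metrics(z_true, z_pred):
    rmse = np.sqrt(np.mean((z_true - z_pred) ** 2))   # primary metric
    rho, _ = spearmanr(z_true, z_pred)                # Spearman rank correlation
    return rmse, rho

def subtask2_metrics(y_true, y_pred, z1, z2):
    keep = y_true != 0                                # label-0 pairs are ignored
    y_true, y_pred = y_true[keep], y_pred[keep]
    gap = np.abs(z1[keep] - z2[keep])                 # |z(1) - z(2)|
    correct = y_true == y_pred
    accuracy = correct.mean()
    reward = np.mean(np.where(correct, gap, -gap))    # +gap if correct, -gap otherwise
    return accuracy, reward

# toy usage with made-up scores and labels
print(subtask1_metrics(np.array([1.0, 2.2, 0.4]), np.array([1.1, 1.8, 0.9])))
print(subtask2_metrics(np.array([1, 2, 0, 1]), np.array([1, 1, 1, 1]),
                       np.array([2.0, 0.5, 1.0, 1.5]), np.array([1.0, 1.5, 1.0, 0.5])))
```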
{
"text": "For the official evaluation, our submitted system is RoBERTa-FREEZE-CONTEXT. We use Adam (Kingma and Ba, 2015) optimizer with a learning rate of 1e-3 and use the 10 th epoch. Our system gets 0.531 RMSE for Subtask 1 (11 th among 49 submissions) and 0.632 accuracy for Subtask 2 (9 th among 32 submissions) Table 3 : Comparison between non-contrastive and contrastive approaches, based on RoBERTa. CONTEXT and ORIGINAL are contrastive, using the edit-context pair and the edit-original pair as input respectively. EDIT is non-contrastive, using the edit sentence only. The results are from test sets of the two subtasks. We tune other hyperparameters on the validation sets and select the best model for each cell.",
"cite_spans": [],
"ref_spans": [
{
"start": 306,
"end": 313,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Official Evaluation",
"sec_num": "5.2"
},
{
"text": "on the test set. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Official Evaluation",
"sec_num": "5.2"
},
{
"text": "In the post-evaluation phase, we conduct a more extensive search on hyperparameters and select the best models based on validation performance. Experiment details and hyperparameters are in Appendix B. We systematically compare CBOW, BERT, and RoBERTa, and perform an ablation study to understand the effects of various factors: extra training data, finetuning or freezing the pretrained embeddings, and using CONTEXT or ORIGINAL feature. The post-evaluation results on the test set are in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 490,
"end": 497,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Post-Evaluation",
"sec_num": "5.3"
},
{
"text": "Contextual Representation Despite its simplicity, CBOW is surprisingly effective. Its best result is significantly better than the baseline and is comparable to Subtask 1 #19 (0.547) and Subtask 2 #17 (0.605) on the leaderboard. By comparing the three models, we see that pretrained language models have better performance than context-independent word embedding. While results for BERT and RoBERTa are similar, both of them outperform CBOW, evidencing that contextual information is essential for humor detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Evaluation",
"sec_num": "5.3"
},
{
"text": "CONTEXT vs. ORIGINAL In the ablation study, we first notice that neither finetuning nor using extra data from the FunLines dataset make much difference for all models. Interestingly, using different contrast pairs as the feature has different effects on models. CONTEXT is better than ORIGINAL for CBOW, yet they are similar for pretrained language models. Why does this happen? CBOW with ORIGINAL only uses the information of edited word and original word while completely neglecting the contextual relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Evaluation",
"sec_num": "5.3"
},
{
"text": "Pairing CBOW with CONTEXT can alleviate this limitation. On the other hand, pretrained language models exploit the contextual relation between edit words and context words in both cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Evaluation",
"sec_num": "5.3"
},
{
"text": "6 Analysis and Discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-Evaluation",
"sec_num": "5.3"
},
{
"text": "In our main experiment, we focus on the contrastive approach using a sentence pair (i.e., CONTEXT and ORIGINAL) and show its effectiveness. The remaining question is, can we predict humor using the edited sentence as the only input? Thus, we investigate a non-contrastive approach, with a single encoder to obtain the span representation of the edit sentence. We refer to this as EDIT. This is equivalent to only using the left part in Figure 1 . We conduct an experiment with RoBERTa on both subtasks. The results are in Table 3 . We see that EDIT has similar performance as contrastive approaches. We conjecture that EDIT captures contrast implicitly, while CONTEXT and ORIGINAL capture contrast explicitly by design.",
"cite_spans": [],
"ref_spans": [
{
"start": 436,
"end": 444,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 522,
"end": 529,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-contrastive Approach",
"sec_num": "6.1"
},
{
"text": "To understand the relationship between human judgment and model predictions, we calculate the correlation matrix between true funniness scores and predicted scores from RoBERTa with different features (CONTEXT, ORIGINAL, and EDIT) for Subtask 1. From Table 4 , we see that the models correlate poorly with human judgment (correlations \u2248 0.4), while correlating well with each other (correlation \u2248 0.8). To further learn when the models make an erroneous judgment, we look at model predictions on the test set of Subtask 1. We see the models generally capture the incongruity phenomenon. While being key to many examples, incongruity does not account for others. We summarize some typical examples in Table 5: \u2022 For Headline 9100, the edit word \"children\" is incongruous with the context of national security.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 700,
"end": 708,
"text": "Table 5:",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.2"
},
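A small sketch of how such a mixed Pearson/Spearman correlation matrix could be computed; the score arrays below are random placeholders rather than the actual human ratings and model outputs.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def correlation_matrix(columns):
    """Pearson in the lower triangle, Spearman in the upper triangle (cf. Table 4).
    `columns` maps a name (e.g. 'human', 'CONTEXT') to an array of funniness scores."""
    names = list(columns)
    mat = np.eye(len(names))
    for a, name_a in enumerate(names):
        for b, name_b in enumerate(names):
            if a > b:
                mat[a, b] = pearsonr(columns[name_a], columns[name_b])[0]
            elif a < b:
                mat[a, b] = spearmanr(columns[name_a], columns[name_b])[0]
    return names, mat

# toy usage with random stand-ins for true and predicted scores
rng = np.random.default_rng(0)
scores = {k: rng.random(100) for k in ["human", "ORIGINAL", "CONTEXT", "EDIT"]}
print(correlation_matrix(scores)[1].round(2))
```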
{
"text": "While humans consider it not funny at all, models assign a high funniness score. The fallacy is that incongruity is not a sufficient condition for humor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.2"
},
{
"text": "\u2022 In other cases, the edit words are congruous with the context. While humans consider them very funny, models predict the opposite. That is, incongruity is not a necessary condition for humor. Humor has diverse underlying causes. For instance, Headline 12685 shows sarcasm, taunting Trump's lack of geography knowledge and common sense. Headline 12271 uses pun based on polysemy: \"turkey\" can either mean a country (when capitalized) or a bird. Also, humor can require an understanding of cultural commentary, exemplified by Headline 9406. Since the Cheesecake Factory is a large chain of restaurants that some may look down upon, they are happy to see it blown up with \"no complaints\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.2"
},
{
"text": "We use incongruity as the key to assessing funniness in edited news headlines. Specifically, we use pretrained transformer-based language models to encode contrastive pairs. Our best performing model is RoBERTa, which is submitted for the official evaluation and achieves competitive performance in both subtasks. The additional experiment shows that a non-contrastive approach may also encode incongruity implicitly. While incongruity is a common ingredient of humor, error analysis indicates it is neither sufficient nor necessary. This invites future research to take other factors (e.g., sarcasm, pun, or world knowledge) into account to better tackle humor, an intricate phenomenon rooted in human creativity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "An alternative architecture is to use a single transformer to encode the concatenation of a sentence pair, which enforces crosssentence attention. However, since the paired sentences here are almost identical, cross-sentence attention seems unnecessary.4 HuggingFace Transformers library: https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Task leaderboard: https://competitions.codalab.org/competitions/20970#results. \"Evaluation-Task-1\" is for Subtask 1 and \"Evaluation-Task-2\" is for Subtask 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Karl Stratos for his insightful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Preprocessing We use spaCy word tokenizer for CBOW. The pretrained transformers use byte-pair encoding (Sennrich et al., 2016, BPE) to convert text into subword units. BERT uses WordPiece (Wu et al., 2016 ) tokenization, a character-level BPE, with a vocabulary size of 30K. RoBERTa preserves cases and uses a byte-level BPE with a vocabulary size of 50K.Training For training, we use a batch size of 32 in Subtask 1 and 16 in Subtask 2. We use Adam optimizer and perform gradient clipping with a max 2 norm of 5. For most experiments, we train for 10 epochs with a learning rate in {1e-3, 3e-4}. However, when finetuning transformers, we choose max epochs in {3, 10}, and use either a constant learning rate or a linear decreasing schedule with an initial learning rate in {2e-5, 5e-5}. We perform validation on the development set every 1/3 epoch and save the best checkpoint.",
"cite_spans": [
{
"start": 103,
"end": 131,
"text": "(Sennrich et al., 2016, BPE)",
"ref_id": null
},
{
"start": 188,
"end": 204,
"text": "(Wu et al., 2016",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Experiment Details",
"sec_num": null
}
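The training settings in this appendix could be wired up roughly as follows. The model, data loader, and per-experiment hyperparameter choices are placeholders, and using the HuggingFace schedule helper for the linearly decreasing learning rate is an assumption about implementation.

```python
import torch
from torch.nn.utils import clip_grad_norm_
from transformers import get_linear_schedule_with_warmup

def train(model, loader, finetune=False, epochs=10, lr=None):
    """Sketch of the Appendix B training setup: Adam, gradient clipping at l2 norm 5,
    and an optional linearly decreasing schedule when finetuning transformers."""
    lr = lr if lr is not None else (2e-5 if finetune else 1e-3)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = None
    if finetune:
        scheduler = get_linear_schedule_with_warmup(
            optimizer, num_warmup_steps=0, num_training_steps=epochs * len(loader))
    for _ in range(epochs):
        for features, targets in loader:
            loss = ((model(features).squeeze(-1) - targets) ** 2).mean()
            optimizer.zero_grad()
            loss.backward()
            clip_grad_norm_(model.parameters(), max_norm=5.0)  # max l2 norm of 5
            optimizer.step()
            if scheduler is not None:
                scheduler.step()

# toy usage with a linear model and random data standing in for the real features
toy_model = torch.nn.Linear(10, 1)
toy_loader = [(torch.randn(4, 10), torch.rand(4) * 3) for _ in range(5)]
train(toy_model, toy_loader, finetune=False, epochs=1)
```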
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Signature verification using a \"Siamese\" time delay neural network",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Bromley",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "S\u00e4ckinger",
"suffix": ""
},
{
"first": "Roopak",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 1994,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "737--744",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1994. Signature verification using a \"Siamese\" time delay neural network. In Advances in Neural Information Processing Systems, pages 737-744.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Is this a joke? Detecting humor in Spanish tweets",
"authors": [
{
"first": "Santiago",
"middle": [],
"last": "Castro",
"suffix": ""
},
{
"first": "Mat\u00edas",
"middle": [],
"last": "Cubero",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Garat",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Moncecchi",
"suffix": ""
}
],
"year": 2016,
"venue": "Ibero-American Conference on Artificial Intelligence (IBERAMIA 2016)",
"volume": "",
"issue": "",
"pages": "139--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santiago Castro, Mat\u00edas Cubero, Diego Garat, and Guillermo Moncecchi. 2016. Is this a joke? Detecting humor in Spanish tweets. In Ibero-American Conference on Artificial Intelligence (IBERAMIA 2016), pages 139-150. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SentEval: An evaluation toolkit for universal sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pun generation with surprise",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1734--1744",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Nanyun Peng, and Percy Liang. 2019. Pun generation with surprise. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1734-1744.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "president vows to cut <taxes> hair\": Dataset and analysis of creative text editing for humorous headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, and Michael Gamon. 2019. \"president vows to cut <taxes> hair\": Dataset and analysis of creative text editing for humorous headlines. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133-142.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2020 Task 7: Assessing humor in edited news headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020a. Semeval-2020 Task 7: Assessing humor in edited news headlines. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2020).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Stimulating creativity with FunLines: A case study of humor generation in headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Tanvir",
"middle": [],
"last": "Sajed",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "256--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Tanvir Sajed, and Henry Kautz. 2020b. Stimulating creativity with FunLines: A case study of humor generation in headlines. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 256-262.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A computational model of linguistic humor in puns",
"authors": [
{
"first": "Justine",
"middle": [
"T"
],
"last": "Kao",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"D"
],
"last": "Goodman",
"suffix": ""
}
],
"year": 2016,
"venue": "Cognitive Science",
"volume": "40",
"issue": "5",
"pages": "1270--1285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justine T. Kao, Roger Levy, and Noah D. Goodman. 2016. A computational model of linguistic humor in puns. Cognitive Science, 40(5):1270-1285.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A BERT-based approach for automatic humor detection and scoring",
"authors": [
{
"first": "Jihang",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Wanli",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Iberian Languages Evaluation Forum",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihang Mao and Wanli Liu. 2019. A BERT-based approach for automatic humor detection and scoring. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Philosophy of humor",
"authors": [
{
"first": "John",
"middle": [],
"last": "Morreall",
"suffix": ""
}
],
"year": 2016,
"venue": "The Stanford Encyclopedia of Philosophy",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Morreall. 2016. Philosophy of humor. In The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edition.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "PyTorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "K\u00f6pf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Devito",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zem- ing Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas K\u00f6pf, Edward Yang, Zach DeVito, Mar- tin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Informa- tion Processing Systems, pages 8026-8037.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representa- tion. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "What do you learn from context? Probing for sentence structure in contextualized word representations",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proceedings of International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Incongruity in humor: Root cause or epiphenomenon? Humor",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "17",
"issue": "",
"pages": "419--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Veale. 2004. Incongruity in humor: Root cause or epiphenomenon? Humor, 17(4):419-428.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Humor detection: A transformer gets the last laugh",
"authors": [
{
"first": "Orion",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3621--3625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orion Weller and Kevin Seppi. 2019. Humor detection: A transformer gets the last laugh. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3621-3625.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "HuggingFace's Transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the- art natural language processing. arXiv preprint arXiv:1910.03771.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Duluth at SemEval-2017 task 6: Language models in humor detection",
"authors": [
{
"first": "Xinru",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "385--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinru Yan and Ted Pedersen. 2017. Duluth at SemEval-2017 task 6: Language models in humor detection. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 385-389.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Denote a span as a tuple of start and end position of contiguous tokens: s = [i, j] for edit,s = [i, k] for original, and s = [i, i] for the [MASK] token."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Transformer architecture to predict the funniness score of an edited headline using edit-context sentence pair. The full edited sentence is \"What if donkeys had as much influence as economists\". \"donkeys\" is the edited word and is tokenized into two subwords in this example. get edit word vector u = MeanPool(\u1ebd i . . .\u1ebd k ) and original word vector v = MeanPool(e i . . . e j )."
},
"TABREF3": {
"content": "<table/>",
"text": "Post-evaluation test results. FREEZE: no finetuning; FT: finetuning; CONTEXT: use context of edit headline; ORIGINAL: use original headline; EXTRA: use additional training data from the FunLines dataset. RMSE \u2020 and Accuracy \u2020 are primary metrics. The best within each model type is bolded, and the overall best is underlined. Gain measures the improvement over CBOW-CONTEXT-FREEZE. In Subtask 1 this is w.r.t. to RMSE and in Subtask 2 w.r.t. accuracy. Baseline 1 uses the average score and Baseline 2 uses the majority label.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table><tr><td>ID</td><td>Original Headline</td><td>Edit</td><td>Human</td><td/><td>Prediction</td><td/></tr><tr><td/><td/><td/><td/><td colspan=\"3\">ORIGINAL CONTEXT EDIT</td></tr><tr><td>9100</td><td>WSJ: Trump's top national security adviser is being investigated for his communications with Russia</td><td>children</td><td>0.0</td><td>1.35</td><td>1.50</td><td>0.94</td></tr><tr><td>9406</td><td>Man Sets Off Explosive Device at L.A.-Area Cheesecake Factory, No Injuries</td><td colspan=\"2\">complaints 2.4</td><td>0.76</td><td>0.52</td><td>0.72</td></tr><tr><td>12271</td><td>Turkey tells citizens to reconsider travelling to US</td><td>poultry</td><td>2.4</td><td>0.85</td><td>0.83</td><td>0.65</td></tr><tr><td>12685</td><td>CBS Poll: Americans lack confidence in Trump's ability to handle North Korea</td><td>locate</td><td>2.4</td><td>0.97</td><td>0.94</td><td>0.96</td></tr></table>",
"text": "The correlation matrix of human scores and predicted scores from RoBERTa with different features: EDIT, CONTEXT, and ORIGINAL. Lower triangle: Pearson correlation coefficients. Upper triangle: Spearman correlation coefficients. Results are from the test set of Subtask 1.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF7": {
"content": "<table/>",
"text": "Error analysis on the test set of Subtask 1. The examples show disagreements between human ratings and model predictions.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}