{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:19:22.453614Z"
},
"title": "JokeMeter at SemEval-2020 Task 7: Convolutional humor",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Docekal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brno University of Technology",
"location": {
"postCode": "612 66",
"settlement": "Brno",
"country": "Czech Republic"
}
},
"email": "idocekal@fit.vutbr.cz"
},
{
"first": "Martin",
"middle": [],
"last": "Fajcik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brno University of Technology",
"location": {
"postCode": "612 66",
"settlement": "Brno",
"country": "Czech Republic"
}
},
"email": "ifajcik@fit.vutbr.cz"
},
{
"first": "Josef",
"middle": [],
"last": "Jon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brno University of Technology",
"location": {
"postCode": "612 66",
"settlement": "Brno",
"country": "Czech Republic"
}
},
"email": "ijon@fit.vutbr.cz"
},
{
"first": "Pavel",
"middle": [],
"last": "Smrz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Brno University of Technology",
"location": {
"postCode": "612 66",
"settlement": "Brno",
"country": "Czech Republic"
}
},
"email": "smrz@fit.vutbr.cz"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our system that was designed for Humor evaluation within the SemEval-2020 Task 7. The system is based on convolutional neural network architecture. We investigate the system on the official dataset, and we provide more insight to model itself to see how the learned inner features look. 1 The inter-annotator agreement measured with Krippendorff's interval metric is just 0.2 (Hossain et al.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our system that was designed for Humor evaluation within the SemEval-2020 Task 7. The system is based on convolutional neural network architecture. We investigate the system on the official dataset, and we provide more insight to model itself to see how the learned inner features look. 1 The inter-annotator agreement measured with Krippendorff's interval metric is just 0.2 (Hossain et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper deals with estimating the humor of edited English news headlines (Hossain et al., 2020b; Hossain et al., 2020a) . The illustration of tasks is in Figure 1 . The original text sequence is given, which represents a title, with the annotated part that is edited along with the edit itself. Our responsibility is to determine how funny this change is in the range from 0 to 3 (inclusive). This is called Sub-Task 1. We also participate in the Sub-Task 2, in that we should decide which from both given edits is the funnier one. For the second task, we used the approach of reusing the model from the first task as it is described in section 4. So we are focusing the description on Sub-Task 1. Figure 1 : Examples for both tasks. The red color in the original title marks part that is substituted with a green word.",
"cite_spans": [
{
"start": 76,
"end": 99,
"text": "(Hossain et al., 2020b;",
"ref_id": "BIBREF10"
},
{
"start": 100,
"end": 122,
"text": "Hossain et al., 2020a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 157,
"end": 165,
"text": "Figure 1",
"ref_id": null
},
{
"start": 701,
"end": 709,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Official results were achieved with a Convolutional Neural Networks (CNNs) (LeCun et al., 1999; Fukushima and Miyake, 1982 ), but we also tested numerous other approaches such as SVM (Cortes and Vapnik, 1995) and pre-trained transformer model (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "(LeCun et al., 1999;",
"ref_id": "BIBREF14"
},
{
"start": 96,
"end": 122,
"text": "Fukushima and Miyake, 1982",
"ref_id": "BIBREF6"
},
{
"start": 183,
"end": 208,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF4"
},
{
"start": 243,
"end": 265,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The humor is a very subjective phenomenon, as can be seen from the inter-agreement on label annotation in Sub-Task 1 dataset 1 . The given data labels do not allow us to learn a sense of humor of a human annotator because the dataset does not specify from whom the grade comes. So, for example, if we have one annotator that likes dark humor and all the others not, we will be considering such a replacement as not humorous no meter if it is excellent dark humor or not. In other words, we may say that we are searching for some most common kind of humor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dominant theory of humor is the Incongruity Theory (Morreall, 2016) . It says that we are finding humor in perceiving something unexpected (incongruous) that violates expectations that were set up by the joke. There are samples, in the provided dataset, that uses the incongruity to create humor. Moreover, according to Hossain et al. (2020a) , we can see a positive influence of incongruity on systems results for the dataset. position 1 2 3 4 5 RMSE 1.179 0.583 0.403 0.63 0.903 Table 1 : Results on the Sub-Task 1 train set of the oracle classifier, which always correctly predict the grade on the n-th position. grade 0 1 2 3 RMSE 1.103 0.587 1.214 2.145 Table 2: This table shows results for a classifier that would always predict the same grade for the Sub-Task 1 train set.",
"cite_spans": [
{
"start": 55,
"end": 71,
"text": "(Morreall, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 324,
"end": 346,
"text": "Hossain et al. (2020a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 485,
"end": 492,
"text": "Table 1",
"ref_id": null
},
{
"start": 663,
"end": 688,
"text": "Table 2: This table shows",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Computational humor is usually divided into two groups: recognition and generation. Humor generation focuses on creating the humor itself, so it can be a system that is able to tell a joke.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recognition can be a binary (funny or not) classification task, but in our work, we also want to know how funny a sequence is by rating it with a grade in a given interval. Hossain (2019) , which introduced the dataset used in this work, tried to create models that can classify if a given edited title is funny or not, so they were training just binary classifiers, in contrast with our regression approach.",
"cite_spans": [
{
"start": 173,
"end": 187,
"text": "Hossain (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Many other works use this binary approach (Cattle and Ma, 2018; Chen and Soo, 2018; Bertero and Fung, 2016; Weller and Seppi, 2019; Purandare and Litman, 2006) . They use various models, such as LSTM (Hochreiter and Schmidhuber, 1997) , CNN, and BERT (Devlin et al., 2019) , to deal with this task.",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Cattle and Ma, 2018;",
"ref_id": "BIBREF2"
},
{
"start": 64,
"end": 83,
"text": "Chen and Soo, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 84,
"end": 107,
"text": "Bertero and Fung, 2016;",
"ref_id": "BIBREF0"
},
{
"start": 108,
"end": 131,
"text": "Weller and Seppi, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 132,
"end": 159,
"text": "Purandare and Litman, 2006)",
"ref_id": "BIBREF20"
},
{
"start": 200,
"end": 234,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
},
{
"start": 251,
"end": 272,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our official results were achieved with a CNN model that is inspired by architecture presented in (Zhang and Wallace, 2015) . Their architecture is compact to its size (one layer CNN), so it provides the advantage of using less computational resources, in contrast with big models like BERT. Even with the small size, they were able to achieve promising results for the sentence classification task. Also, such a small model allows us to gain better insight into what is going on underneath.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "(Zhang and Wallace, 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we would like to point out some interesting facts about the used data. We are focusing on the data for Sub-Task 1 because we used the model trained on the Sub-Task 1 for Sub-Task 2 (more in section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "For each example, the dataset (Hossain et al., 2019) provides annotation in the form of grades (0,1,2 and 3) of humor assessment in sorted descending order and the mean of these grades. In most cases, there are five grades per dataset sample (sometimes more). In our work, we always use the first five grades. As can be seen from graphs in Figure 2 , the dataset is imbalanced, and the high grades are rare. Also, we investigated how the dataset is imbalanced when considering just single n-th grade and omit the others. Though we can say, from the graph on the right side of the very same figure, that there is still imbalance, we can see that mainly the 2. and 3. positions seem to have smaller fluctuance among the number of samples per grade than the other positions. 0.0-0.3 0.3-0.6 0.6-0.9 0.9-1.2 1.2-1.5 1.5-1.8 1.8-2.1 2.1-2.4 2.4-2.7 2.7-3. We also did further analysis to determine prediction quality for the case when we would have an oracle classifier always predicting the grade on the n-th position. The results are in Table 1 . It can be seen that the third position is superior.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Hossain et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 340,
"end": 348,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1034,
"end": 1041,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Another thing that we decided to investigate is how the RMSE score would look like if we always predict the same grade. The results are in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
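{
"text": "The two diagnostic baselines above, the constant predictor of Table 2 and the oracle that returns the grade at the n-th sorted position of Table 1, can be sketched in a few lines of Python. The grade lists below are invented toy data, not the official dataset:

```python
import math

def rmse(preds, targets):
    # Root-mean-square error between two equal-length sequences.
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

# Toy annotations: five grades per headline, sorted in descending order,
# mimicking the Sub-Task 1 format (values are made up for illustration).
grades = [
    [3, 2, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [2, 2, 1, 1, 1],
]
means = [sum(g) / len(g) for g in grades]

# Constant baseline: always predict the same grade (cf. Table 2).
for c in range(4):
    print(f'constant {c}: RMSE = {rmse([c] * len(means), means):.3f}')

# Oracle baseline: always output the grade at the n-th sorted position (cf. Table 1).
for n in range(5):
    print(f'position {n + 1}: RMSE = {rmse([g[n] for g in grades], means):.3f}')
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},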
{
"text": "The main inspiration for our model architecture (JokeMeter) comes from Zhang and Wallace's (2015) work. The used model has CNN architecture illustrated in Figure 3 . Firstly the input sequence is assembled. The edit is inserted into the original title after the part that is being edited. Additionally, we add a slash separator, and the whole original/edit location is delimited with two hashtags. In this way, we were able to add input for the model that has complete information about the original and the edited title. We also include tokens to mark the start and the end of a title. The reason behind these tokens is that we want to encode information into an n-gram, whether it is from the beginning/end of a title or not, and possibly make it easier for the model to learn the setup and punchline humor. We tokenized the input by ALBERT (Lan et al., 2019) pre-trained SentencePiece (Kudo and Richardson, 2018) tokenizer. Each token is assigned a 128-dimensional embedding from 30 000 size vocabulary. Right before the convolution, is added zero-padding of size one on both sides of the sequence. Each sequence is 512 tokens long; shorter sequences are padded with unique padding tokens.",
"cite_spans": [
{
"start": 71,
"end": 97,
"text": "Zhang and Wallace's (2015)",
"ref_id": "BIBREF24"
},
{
"start": 843,
"end": 861,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 888,
"end": 915,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
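{
"text": "A minimal sketch of the input assembly described above. The concrete marker strings (start/end tokens, hash delimiters, slash separator) and the helper name follow our reading of the description and are not the authors' exact implementation:

```python
START, END = '[TITLE_START]', '[TITLE_END]'

def assemble_input(original: str, edit: str) -> str:
    # original contains the edited span in <.../> markers, as in the dataset,
    # e.g. 'President vows to cut <taxes/> this year'.
    left, rest = original.split('<', 1)
    edited_part, right = rest.split('/>', 1)
    # The edit goes right after the edited part, with a slash separator,
    # and the whole original/edit location is delimited by two hashtags.
    middle = f'# {edited_part} / {edit} #'
    parts = (START, left.strip(), middle, right.strip(), END)
    return ' '.join(p for p in parts if p)

print(assemble_input('President vows to cut <taxes/> this year', 'hair'))
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},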
{
"text": "We used four different convolution filter region sizes 2, 3, 4, and 8. For each size, we had two filters. These filters are followed with LeakyReLu (Maas et al., 2013) activation (negative slope is 0.01). We also experimented with a model (JokeMeterBoosted) that uses 2048 filters for each size, 2048-dimensional embeddings, and does not use the embedding of the edit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
{
"text": "In the final part of our model, we do max pooling in order to get one feature per filter (8 in total). We concatenate the features to compose a vector, and then that vector is again concatenated with the edit embedding. The edit embedding is an average of embeddings of all tokens the edit is composed of.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
{
"text": "On the very end, we perform linear transformation followed by the softmax to get probabilities for each grade. With that configuration, we would not be doing a regression task, so at the test time we must do one final calculation that will transform these probabilities into a grade from the continuous interval [0,3]: feature 0 1 region size 2 3 4 8 2 3 4 8 \u03c3 0 0.22 0 0.21 0 0.13 0.23 0 r s --0.43 --0.52 --0.6 -0.44 - ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E[G|S = s] = i=3 i=0 p i i .",
"eq_num": "(1)"
}
],
"section": "System Description",
"sec_num": "4"
},
{
"text": "Where the G is a grade, s is a input sequence, and p i is the estimated probability for grade i. In the case of Sub-Task 2, we run the model for both titles separately, and in the end, we made the decision by comparing their estimated grades.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},
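{
"text": "Equation (1) and the Sub-Task 2 decision rule can be sketched as follows; the logits are illustrative, not real model outputs:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_grade(probs):
    # Equation (1): E[G|S=s] = sum over i = 0..3 of p_i * i.
    return sum(p * i for i, p in enumerate(probs))

# Sub-Task 2: run the model on both titles and pick the higher expected grade.
p_a = softmax([0.1, 2.0, 0.5, -1.0])
p_b = softmax([1.5, 0.2, 0.2, 0.3])
funnier = 'A' if expected_grade(p_a) > expected_grade(p_b) else 'B'
print(funnier)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "4"
},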
{
"text": "We used two filters for each region size because we expected that the model would be able to train one feature that will signalize funny and one feature that will signalize not funny. Nevertheless, our analysis instead shows, it learns features that just signalizes how much not funny the given n-gram is, as can be seen in Figure 4 . To gain further insight into this property, we calculated, for the Sub-Task 1 train set, Spearman's rank correlation coefficient between mean grade and each feature (after max pooling). The results in Table 3 shows a negative correlation that corresponds to our hypothesis. An interesting finding is that quite a lot of features have zero variance, which means that constant was learned. That leads us to think that these features reflect the fact that we are able to achieve relatively good RMSE with predicting constant (e.g., one as shown in Table 2 ) due to the imbalance in the dataset. ",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 536,
"end": 543,
"text": "Table 3",
"ref_id": "TABREF1"
},
{
"start": 880,
"end": 887,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Convolutional features analysis",
"sec_num": "4.1"
},
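{
"text": "The correlation analysis above can be reproduced without external libraries. A small self-contained sketch with toy feature values (not the real ones); note that r_s is undefined for the zero-variance features, which matches the dashed entries in Table 3:

```python
def ranks(xs):
    # Average 1-based ranks; tied values share the mean of their ranks.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Spearman's r_s: Pearson correlation of the two rank vectors.
    # Division by zero occurs for zero-variance inputs, so skip those.
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A feature whose value falls as the mean grade rises correlates negatively:
print(spearman([0.0, 0.5, 1.0, 2.0], [0.9, 0.7, 0.4, 0.1]))  # close to -1.0
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional features analysis",
"sec_num": "4.1"
},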
{
"text": "In this section, we describe the experiments we performed, not just with the model that is described in section 4. Apart from models based on neural networks, we compiled several baselines: Decision Tree Classifier (DTC) (Breiman et al., 1984) , SVM, k-NN, and Naive Bayes Classifier (NBC). Also, we experimented with a model that uses transformer architecture (ALBERT-base-v2). The neural models were implemented with PyTorch (Paszke et al., 2019) . For the ALBERT and the tokenizer, we used Hugging Face (Wolf et al., 2019) implementations, and for non-neural models, the implementations from the scikit-learn (Pedregosa et al., 2011) were used.",
"cite_spans": [
{
"start": 221,
"end": 243,
"text": "(Breiman et al., 1984)",
"ref_id": "BIBREF1"
},
{
"start": 427,
"end": 448,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 506,
"end": 525,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 612,
"end": 636,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We did two kinds of experiments for all models. The first kind uses all five grades for each sample (all-grades training) during the training. Every sample is copied five times, and a grade is assigned to each of them, so we have five samples that have the same content, but each can have a different grade. On the other hand, the second kind of experiment uses only the 3rd grade (3-grade training), which has for oracle classifier the best score (see Table 1 ). Table 4 : Results on the test set for non-neural models. The number of neighbors for k-NN differs among the experiments. For the Sub-Task 1 we use k = 5, and for Sub-Task 2 k = 13 is used.",
"cite_spans": [],
"ref_spans": [
{
"start": 453,
"end": 460,
"text": "Table 1",
"ref_id": null
},
{
"start": 464,
"end": 471,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
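{
"text": "The two training-set constructions described above can be sketched as follows; the records are invented toy data:

```python
# Each record is (text, [five grades sorted descending]), as in Sub-Task 1.
records = [
    ('headline A', [3, 2, 1, 1, 0]),
    ('headline B', [1, 1, 0, 0, 0]),
]

# All-grades training: copy every sample five times, one grade per copy.
all_grades = [(text, g) for text, grades in records for g in grades]

# 3-grade training: keep only the grade at the 3rd position (index 2),
# which the oracle analysis in Table 1 found to be the best single position.
third_grade = [(text, grades[2]) for text, grades in records]

print(len(all_grades), len(third_grade))
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},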
{
"text": "Baseline Best JM (all) JM (3.) JMB (all) JA (all) JA (3.) Sub- Task Table 5 : Results on the test set for neural models. We also provide results of the baseline and the best model in the competition. Our official results are in JM (all) column. The JM abbreviation means JokeMeter, JMB is JokeMeterBoosted, JA is JokeALBERT, all means that the all-grades training was used and 3. that the 3-grade training was used.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 67,
"text": "Task",
"ref_id": null
},
{
"start": 68,
"end": 75,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "These models deal with classification, meaning the model must decide between 4 grades instead of selecting a value from [0,3]. TF-IDF word features are used for every model. All models are trained on the train set. We show results for two types of experiments. When the original sequence and the edit (the new word we are inserting into the title) are provided and when we only provide the edit word. The results can be seen in Table 4 . These models are not even able to achieve results that can be provided by the simple model that predicts constant (see Table 2 ). Interestingly, comparable results can be achieved with just using the edit word.",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 4",
"ref_id": null
},
{
"start": 557,
"end": 564,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-neural models",
"sec_num": "5.1"
},
{
"text": "In addition to the model used in our submission (and its version JokeMeterBoosted), we performed experiments with a system using a pre-trained ALBERT model (JokeALBERT). JokeALBERT utilizes contextual embeddings of the whole input sequence from ALBERT and then selects those that belong to the edited word and averages them into one. Finally, the linear transformation with softmax is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural models",
"sec_num": "5.2"
},
{
"text": "To all our neural models, we provided input that uses the same format (see section 4). For both models, we use the cross-entropy loss. We used Adam (Kingma and Ba, 2014) with weight decay (Loshchilov and Hutter, 2017) as a optimizer. We stop the training after no improvement in RMSE on the dev set in five subsequent epochs. The results for these models are presented in Table 5 . Results for JokeMeter and JokeALBERT were obtained with batch size 16 and learning rate 1e-5.",
"cite_spans": [
{
"start": 188,
"end": 217,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 372,
"end": 379,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural models",
"sec_num": "5.2"
},
{
"text": "All results for trained models were evaluated with the official scripts. For the Sub-Task 1, we always show the root-mean-square error (RMSE) metric, and for the Sub-Task 2, the accuracy is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and baseline",
"sec_num": "5.3"
},
{
"text": "The baseline system for Sub-Task 1 always predicts the mean funniness grade from the training set (0.936), and for Sub-Task 2, it always predicts the most frequent label in the training set (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and baseline",
"sec_num": "5.3"
},
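{
"text": "Both baselines are straightforward to compute from the training labels. A sketch with invented numbers (the real Sub-Task 1 training-set mean is 0.936):

```python
from collections import Counter

# Baseline predictors described above, computed on invented training labels.
train_means = [0.8, 1.2, 0.6, 1.1]   # per-headline mean grades (Sub-Task 1)
train_labels = [1, 2, 1, 0, 1]       # funnier-edit labels (Sub-Task 2)

# Sub-Task 1 baseline: always predict the training-set mean funniness grade.
mean_grade = sum(train_means) / len(train_means)

# Sub-Task 2 baseline: always predict the most frequent training label.
most_frequent = Counter(train_labels).most_common(1)[0][0]

print(mean_grade, most_frequent)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and baseline",
"sec_num": "5.3"
},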
{
"text": "The system description was provided, and we compare the achieved results of the official model with several other models, including the baseline and the best team in the competition. Figure 5 : Comparison of grade predictions for multiple models and the truth labels. We can see that JokeMeter and JokeALBERT's predictions are focused on a small interval around the one. JokeMeter, JokeMeterBoosted and JokeALBERT are for all-grades training, and the SVM is for 3-grade training.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 191,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In future work, it should be more investigated if imbalanced dataset and small inter-annotator agreement caused that the JokeMeter model was more focused on the prior probabilities of grades and not on the input itself (see Figure 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 233,
"text": "Figure 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In Figure 6 can be seen the influence of the token embedding size on the RMSE. The rest of the JokeMeter model configuration remains the same as described in section 4. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "A.1 Embeddings",
"sec_num": null
},
{
"text": "In Figure 7 can be seen the influence of the used convolutional filters per region size on the RMSE. The rest of the JokeMeter model configuration remains the same as described in section 4. As shown in Figure 8 there is a different relation between token embedding size and RMSE for 2048 convolutional filters per region size than for the default 2 (see Figure 6 ). Figure 8 : Influence of the size of the token embedding on the RMSE for 2048 convolutional filters per region size.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 7",
"ref_id": "FIGREF7"
},
{
"start": 203,
"end": 211,
"text": "Figure 8",
"ref_id": null
},
{
"start": 355,
"end": 363,
"text": "Figure 6",
"ref_id": "FIGREF6"
},
{
"start": 367,
"end": 375,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.2 Convolutional features",
"sec_num": null
},
{
"text": "In this section are presented results for ablation experiments for JokeMeter that uses 2048 convolutional filters per region size and 2048 size of the token embeddings. The rest of the model configuration remains the same as described in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Ablation experiments",
"sec_num": null
},
{
"text": "The influence of used features summarizes Table 6 . We can see that the usage of edit embedding does not improve results. We used these findings to create a model that is using 2048 convolutional filters per region size, 2048 dimensional token embeddings, and no edit embedding; we call it JokeMeterBoosted. RMSE convolutional features only 0.550260959621652 \u00b1 0.0012 edit embedding only 0.63520130408731 \u00b1 0.0005 convolutional features and edit embedding 0.5505674648279042 \u00b1 0.0008 Table 6 : Results for the ablation experiments on the JokeMeterBoosted.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 6",
"ref_id": null
},
{
"start": 484,
"end": 491,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A.3 Ablation experiments",
"sec_num": null
},
{
"text": "According to Figure 9 we used for the JokeMeterBoosted batch size 64 and learning rate 1E-5. ",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 9",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "A.4 Batch size and learning rate analysis for JokeMeterBoosted",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Czech Ministry of Education, Youth and Sports, subprogram INTERCOST, project code: LTC18054.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "All the results presented in this section are averages from 3 runs. The evaluation is always done on the dev set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A long short-term memory framework for predicting humor in dialogues",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Bertero",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "130--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Bertero and Pascale Fung. 2016. A long short-term memory framework for predicting humor in dialogues. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 130-135, San Diego, California, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Classification and Regression Trees. The Wadsworth and Brooks-Cole statistics-probability series",
"authors": [
{
"first": "L",
"middle": [],
"last": "Breiman",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Olshen",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Breiman, J. Friedman, C.J. Stone, and R.A. Olshen. 1984. Classification and Regression Trees. The Wadsworth and Brooks-Cole statistics-probability series. Taylor & Francis.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recognizing humour using word associations and humour anchor extraction",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Cattle",
"suffix": ""
},
{
"first": "Xiaojuan",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1849--1858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Cattle and Xiaojuan Ma. 2018. Recognizing humour using word associations and humour anchor ex- traction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1849-1858, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Humor recognition using deep learning",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Von-Wun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Soo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "113--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng-Yu Chen and Von-Wun Soo. 2018. Humor recognition using deep learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 113-117, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Support-vector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning, 20(3):273-297.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position",
"authors": [
{
"first": "Kunihiko",
"middle": [],
"last": "Fukushima",
"suffix": ""
},
{
"first": "Sei",
"middle": [],
"last": "Miyake",
"suffix": ""
}
],
"year": 1982,
"venue": "Pattern recognition",
"volume": "15",
"issue": "6",
"pages": "455--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kunihiko Fukushima and Sei Miyake. 1982. Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position. Pattern recognition, 15(6):455-469.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "president vows to cut <taxes> hair\": Dataset and analysis of creative text editing for humorous headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, and Michael Gamon. 2019. \"president vows to cut <taxes> hair\": Dataset and analysis of creative text editing for humorous headlines. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 133-142, Minneapolis, Minnesota, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2020 Task 7: Assessing humor in edited news headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020a. Semeval-2020 Task 7: Assessing humor in edited news headlines. In Proceedings of International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stimulating creativity with funlines: A case study of humor generation in headlines",
"authors": [
{
"first": "Nabil",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Krumm",
"suffix": ""
},
{
"first": "Tanvir",
"middle": [],
"last": "Sajed",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Kautz",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL 2020, System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabil Hossain, John Krumm, Tanvir Sajed, and Henry Kautz. 2020b. Stimulating creativity with funlines: A case study of humor generation in headlines. In Proceedings of ACL 2020, System Demonstrations, Seattle, Washington, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Object recognition with gradient-based learning",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Haffner",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 1999,
"venue": "Shape, contour and grouping in computer vision",
"volume": "",
"issue": "",
"pages": "319--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Patrick Haffner, L\u00e9on Bottou, and Yoshua Bengio. 1999. Object recognition with gradient-based learning. In Shape, contour and grouping in computer vision, pages 319-345. Springer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Rectifier nonlinearities improve neural network acoustic models",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Awni",
"middle": [
"Y"
],
"last": "Hannun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. icml",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, volume 30, page 3.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Philosophy of humor",
"authors": [
{
"first": "John",
"middle": [],
"last": "Morreall",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Morreall. 2016. Philosophy of humor. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edition.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zem- ing Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chin- tala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scikitlearn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit- learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Humor: Prosody analysis and automatic recognition for",
"authors": [
{
"first": "Amruta",
"middle": [],
"last": "Purandare",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman ; F*r*i*e*n*d*s*",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "208--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amruta Purandare and Diane Litman. 2006. Humor: Prosody analysis and automatic recognition for F*R*I*E*N*D*S*. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Pro- cessing, pages 208-215, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Humor detection: A transformer gets the last laugh",
"authors": [
{
"first": "Orion",
"middle": [],
"last": "Weller",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3621--3625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orion Weller and Kevin Seppi. 2019. Humor detection: A transformer gets the last laugh. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Con- ference on Natural Language Processing (EMNLP-IJCNLP), pages 3621-3625, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Huggingface's transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the- art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Byron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Zhang and Byron Wallace. 2015. A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "In the histogram on the left side, we can see the number of samples in the train set that have a mean grade in a given bin. The bins are left-inclusive. The graph on the right side shows the number of grades of a given type in the whole train set if we always take just grade on given position per sample.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Kushner to visit <Mexico/> following latest Trump tirades therapist assembling into token sequence[START] kush ner to visit # mexico / therapist # following latest trump ti rade s[END]",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Illustration of our submission model. The S l is a length of sequence that is processed by the convolution filters.",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "Real word examples of convolved features for trigrams.",
"uris": null
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"text": "Influence of the size of the token embedding on the RMSE.",
"uris": null
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"text": "Influence of the number of filters per region size on the RMSE.",
"uris": null
},
"FIGREF8": {
"num": null,
"type_str": "figure",
"text": "JokeMeterBoosted RMSE for three learning rates depending on the batch size.",
"uris": null
},
"TABREF1": {
"html": null,
"text": "Table of Spearman's rank correlation (n = 38608, p < 0.01) coefficients between mean grade and each feature. Also, standard deviations of features values.",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}