{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:12:19.721794Z"
},
"title": "How does Punctuation Affect Neural Models in Natural Language Inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Ek",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Linguistic Theory and Studies in Probability",
"institution": "University of Gothenburg",
"location": {}
},
"email": "adam.ek@gu.se"
},
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": "",
"affiliation": {
"laboratory": "Centre for Linguistic Theory and Studies in Probability",
"institution": "University of Gothenburg",
"location": {}
},
"email": "jean-philippe.bernardy@gu.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Natural Language Inference models have reached almost human-level performance but their generalisation capabilities have not yet been fully characterized. In particular, sensitivity to small changes in the data is a current area of investigation. In this paper, we focus on the effect of punctuation on such models. Our findings can be broadly summarized as follows: (1) irrelevant changes in punctuation are correctly ignored by recent transformer models (BERT), while older RNN-based models were sensitive to them. (2) All models, both transformers and RNN-based models, are incapable of taking into account small relevant changes in punctuation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Natural Language Inference models have reached almost human-level performance but their generalisation capabilities have not yet been fully characterized. In particular, sensitivity to small changes in the data is a current area of investigation. In this paper, we focus on the effect of punctuation on such models. Our findings can be broadly summarized as follows: (1) irrelevant changes in punctuation are correctly ignored by recent transformer models (BERT), while older RNN-based models were sensitive to them. (2) All models, both transformers and RNN-based models, are incapable of taking into account small relevant changes in punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, models for Natural Language Inference (NLI) have reached almost human-level performance. These models frame inference as a classification problem, whose input is a premise/hypothesis pair. It has been noted that small changes in the pair can flip the prediction (Glockner et al., 2018). In this paper, we explore the effect of punctuation 1 on neural models in natural language inference.",
"cite_spans": [
{
"start": 279,
"end": 302,
"text": "(Glockner et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Small changes in a premise/hypothesis pair are of two kinds. First, the change can be of an irrelevant kind. For example, we can expect that removing a sentence-final stop should not change the relationship between a premise and hypothesis sentence. Second, a textually small change could flip the relationship between hypothesis and premise. For example, adding a negation word is a small textual change that has a lot of semantic content. But it is not only words that can have a large impact on the meaning of a sentence. Commas, for example, may indicate which words in a list belong together and which do not. Ideally, an NLI model should be insensitive to changes of the first kind but should still properly recognize changes of the second kind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we test both hypotheses for the case of punctuation. Namely:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 (H1) Deep-learning based classifiers are sensitive to irrelevant punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 (H2) Deep-learning classifiers take relevant punctuation into account correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is part of the larger question concerning the ability of NLI models to generalize. There are a number of papers that report several problems of generalizability: Glockner et al. (2018) have shown that several NLI models break surprisingly easily when, instead of being tested on the original SNLI (Bowman et al., 2015) test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. Talman and Chatzikyriakidis (2018) show that NLI models break down when one trains on one dataset, but then tests on the test set of a similar dataset (e.g. training on MNLI (Williams et al., 2017) and testing on SNLI). Wang et al. (2019) report problems in generalizability when the premise and hypothesis are swapped. The idea is that one should expect the same accuracy for contradiction and neutral when the pairs are swapped (neutral remains neutral, and contradiction remains a contradiction 2 ), and a lower accuracy for entailment (given that entailment turns neutral when the pairs are swapped).",
"cite_spans": [
{
"start": 172,
"end": 194,
"text": "Glockner et al. (2018)",
"ref_id": "BIBREF6"
},
{
"start": 307,
"end": 328,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 522,
"end": 556,
"text": "Talman and Chatzikyriakidis (2018)",
"ref_id": "BIBREF9"
},
{
"start": 691,
"end": 719,
"text": "MNLI (Williams et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments are performed on the Multi-Genre Natural Language Inference (MNLI) corpus (Williams et al., 2017) (and variants thereof, as described below). MNLI consists of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MNLI contains sentence pairs from ten distinct genres 3 of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data and the latter includes sentences from the remaining genres not present in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "We consider three variants of MNLI:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "(orig) This variant is the original MNLI with no changes whatsoever.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "(p) To obtain this variant we make punctuation consistent throughout examples by adding a full stop at the end of each sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "(\u00acp) To obtain this variant we remove all nonalphanumeric characters from each sentence. This also removes special characters that are sometimes not classified as punctuation, such as the dollar sign. However, such characters occur so seldom that they have little influence on the results either way (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "Appending a sentence-final stop is in general reasonable, especially for the non-dialogue examples. For the dialogue part of the MNLI dataset, this is unnatural, as final stops are typically not expressed in dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "To convey an idea of the amount of data that our transformations impact, we show the raw and relative count 4 of punctuation symbols in Table 1 . In total, relative to word-tokens, punctuation symbols account for about 11.5% of the tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets and experiments",
"sec_num": "2"
},
{
"text": "We perform two sets of experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.1"
},
{
"text": "In the first set, designed to test (H1), we train NLI models on each of the three variants (orig, p, \u00acp) and test them on all variants. In the second set, designed to test (H2), we evaluate the models on a small hand-crafted test set where punctuation is meaningful. The first two examples are cases where the commas are used to denote the conjunction of more than one conjunct. Removing the comma between \"my mother\" and \"Anna\" in (2) has a significant effect on counting: what is taken to be two entities in (1) is one entity in (2). In (3) and (4), we get a different label depending on whether the hypothesis refers to the property \"good\" (E) or the adjectival modification \"good god\" (C). The test set consists of 18 examples which can be seen in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 752,
"end": 759,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.1"
},
{
"text": "The experiments are performed using three models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "BiLSTM The simplest model is a bidirectional LSTM that encodes the premise and hypothesis, then applies max pooling. The model then concatenates the premise and hypothesis in the standard fashion (Conneau et al., 2017; Talman et al., 2019) . BERT Our third model is a transformer model, BERT (Devlin et al., 2018) . We use the BERT base model from the transformer library (Wolf et al., 2019) . To train BERT we use a three-layer perceptron with leaky ReLU activations on top of the BERT model and fine-tune. The BERT model processes the premise and hypothesis in parallel, and there is no need to explicitly combine them as with the previous models. For the classification of a sentence pair, we use the CLS token generated by BERT, which contains a summary of the sentences.",
"cite_spans": [
{
"start": 196,
"end": 218,
"text": "(Conneau et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 219,
"end": 239,
"text": "Talman et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 292,
"end": 313,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 372,
"end": 391,
"text": "(Wolf et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "[p; h; p \u2212 h; p * h]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "For each architecture (BERT, HBMP, and BiLSTM) we perform experiments by training four models: two trained and validated on the dataset with punctuation and two trained and validated on the dataset without punctuation. To assess the effect of our data transformation, we test each model on the other dataset, i.e., a model trained and validated without punctuation is tested on the dataset with punctuation. We measure performance in terms of accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "For HBMP and the BiLSTM models we use the default hyperparameters reported by Talman et al. (2019) with GloVe (Pennington et al., 2014 ) word embeddings 5 . The BERT model is fine-tuned with the default model hyperparameters. We use the Adam optimizer with a learning rate of 0.00002 and a batch size of 24.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "Talman et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 110,
"end": 134,
"text": "(Pennington et al., 2014",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "The results from the first experiment are shown below in Table 2 . The experiment shows the accuracy of the models trained on the MNLI variants with and without punctuation, evaluated on all variants.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "First experiment set",
"sec_num": "5.1"
},
{
"text": "The results indicate that when the RNN-based models are tested on the same dataset variant as they were trained on, the results are similar to those of the model trained on the original data. However, when we test on the opposite dataset, the performance drops drastically (about 30 percentage points). The drop in accuracy is about the same for both the matched and mismatched test sets. In contrast to the RNN-based models, the transformer model shows only a slight difference in accuracy when presented with test data different from its training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "First experiment set",
"sec_num": "5.1"
},
{
"text": "Full results from the second experiment can be found in Table 4 . A subset of the examples can be found in Table 3 . The experiment shows the predictions by the HBMP and BERT models trained with and without punctuation on our hand-crafted dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 107,
"end": 114,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Second experiment set",
"sec_num": "5.2"
},
{
"text": "The experiment shows that the BiLSTM and HBMP models trained with punctuation drop significantly in accuracy when tested on data without punctuation. This indicates that when punctuation is removed, the models change their predictions incorrectly. Most of the removed punctuation does not change the meaning; rather, it carries information irrelevant to the relationship between the two sentences (such as a sentence-final stop).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "Inspecting the output of the HBMP model, we can see that in many cases removing a sentence-final stop flips the model's prediction. In examples (5) and (6), both the model trained with punctuation and the one trained without fail to recognize that the final stop does not add any meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "HBMP p = E HBMP \u00acp = C BERT p = N BERT \u00acp = N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "In examples (7) and (8), the sentence-final stop has been removed, as well as a comma. In such a case, the comma does not add any meaning but acts as a separator of clauses. The removal or addition of this comma flips the prediction of the models. This shows that irrelevant changes involving both commas and sentence-final stops can flip the model's prediction without any semantic motivation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "(7) P = so they set about clearing the land for agriculture , setting fire to massive tracts of forest . H = as a result , the land was devastated by erosion . (N) HBMP p = N HBMP \u00acp = C BERT p = N BERT \u00acp = N (8) P = so they set about clearing the land for agriculture setting fire to massive tracts of forest H = as a result the land was devastated by erosion (N) HBMP p = C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "HBMP \u00acp = N BERT p = E BERT \u00acp = C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "BERT assigns the neutral class regardless of punctuation in examples (5) to (7), indicating that the choice of punctuation in training and test does not impact its decision. For example (8) there is no punctuation in the premise and hypothesis, but the different BERT models assign two different classes, entailment by the model trained on punctuation and contradiction by the model trained without punctuation. A possible explanation for why the accuracy of BERT does not behave similarly to that of the LSTM-based models in Table 3 is that the pre-training of BERT allows the model to better ignore variations in the input. However, the HBMP model also uses pre-trained information in the form of GloVe vectors, yet we do not see HBMP handling the discrepancy between training and test well. Although the pre-training of GloVe and BERT differs, in essence they are the same: both model the meaning of words based on their surroundings. Thus, the difference between the models relevant to the presence or absence of punctuation is whether they use self-attention or an LSTM to create sentence representations. From this, we pose the tentative hypothesis that self-attention learns to ignore tokens irrelevant to a task more easily than an LSTM does. However, to confirm this we need to perform more expensive experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiment one analysis",
"sec_num": "5.3"
},
{
"text": "None of the models perform very well for this dataset. The HBMP p model has an accuracy of 61.1% while the HBMP \u00acp has an accuracy of 48.8%. The BERT p model has an accuracy of 44.4% while the BERT \u00acp has an accuracy of 48.8%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment two analysis",
"sec_num": "5.4"
},
{
"text": "For example, both models are tricked by comma removal in (2). An interesting case arises when the comma is removed from \"No, god\", turning it into the negative quantifier \"no god\". The models are tricked when asked to infer \"There is no good god\" from \"No, god is good\" (they predict E instead of C). Another example where the models are tricked by comma removal is the listing of items. In the example \"I thank, my mother, Anna Smith and John\" there are three entities being thanked. The comma placement indicates that \"Anna Smith\" is one person and not two. Only HBMP p fails to predict that \"I thank three people\" is an entailment for this example. The quotation examples are also challenging. Both systems are tricked when asked to judge whether \"I hear John speaking\" follows: a) from \"I hear John says 'come here' \", and b) from \"I hear 'John says come here' \". Both HBMP models correctly predict a) but fail on b). However, they give different wrong labels: (N) for HBMP \u00acp and (E) for HBMP p . For BERT, the models trained on p and \u00acp make the same predictions, further supporting our hypothesis that BERT does not take meaningful punctuation into account, even when trained with punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment two analysis",
"sec_num": "5.4"
},
{
"text": "The conclusions of this paper can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Only BERT is robust to irrelevant changes in punctuation (refuting H1 for BERT). The other models see a significant drop in performance for any mismatch in the presence of punctuation between the training and testing sets. However, the presence or absence of the full stop at the end of a sentence has little effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "This statement rests on the observation that punctuation is generally semantically insignificant in MNLI. This fact has not been tested using a model but rather relies on manual inspection of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We have evidence that no model is capable of taking into account cases where punctuation is meaningful. At this stage of our research, this evidence does not rely on a large body of data. This result is not surprising given the above observation (namely, that there is not enough meaningful punctuation in the training set). Yet, we use pre-trained embeddings (BERT) which have been trained on a very large dataset, and it could not be ruled out a priori that such embeddings contain information related to the meaning of punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "As a general remark, it seems to us useful, if not necessary, to extend the present datasets for NLI to include examples where punctuation is actually meaningful. In general, this is part of a discussion of extending current datasets to include cases of inference where more fine-grained phenomena are taken into consideration (Chatzikyriakidis et al., 2017; Bernardy and Chatzikyriakidis, 2019, 2020). This also connects with the generalization capabilities of NLI models that were briefly brought up in the introduction. However, the goal should not only be to create many diverse datasets that can get very fine-grained for numerous syntactic phenomena. What we further need are models that have the ability to generalize well to new data after they have been trained on datasets that represent a much more diverse and rich picture of NLI, and are not prone to the problems that have been reported in the literature (Glockner et al., 2018; Talman and Chatzikyriakidis, 2018; Wang et al., 2019; Poliak et al., 2018) .",
"cite_spans": [
{
"start": 373,
"end": 395,
"text": "Chatzikyriakidis (2019",
"ref_id": "BIBREF0"
},
{
"start": 396,
"end": 421,
"text": "Chatzikyriakidis ( , 2020",
"ref_id": "BIBREF1"
},
{
"start": 953,
"end": 976,
"text": "(Glockner et al., 2018;",
"ref_id": "BIBREF6"
},
{
"start": 977,
"end": 1011,
"text": "Talman and Chatzikyriakidis, 2018;",
"ref_id": "BIBREF9"
},
{
"start": 1012,
"end": 1030,
"text": "Wang et al., 2019;",
"ref_id": null
},
{
"start": 1031,
"end": 1051,
"text": "Poliak et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In future work, we plan to continue pursuing the question of model generalizability by investigating how neural models for natural language inference can be adapted to take into account fine-grained semantic phenomena. More specifically, how can models be adapted to learn what constitutes a semantically meaningful part of a sentence and what does not? We note that the phenomenon of punctuation is primarily \"syntactic sugar\", marking how a sentence is constructed syntactically (by inserting or removing punctuation). To exploit this, we plan to incorporate syntactic representations of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "We present the full dataset we developed below in Table 4 along with the HBMP and BERT models' predictions on the dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "A Appendix: Dataset",
"sec_num": null
},
{
"text": "The set of punctuation symbols we consider is: '!\"#$%&() * +,-./:;<=>?@[]\\\u02c6'{}| '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Even though one can imagine exotic, non-symmetric definitions of \"neutral\" and \"contradiction\", we are not aware of any system or dataset using such a definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Face-to-face conversations, telephone conversations, letters, Oxford University Press publications, etc. 4 Relative to the total number of tokens in the MNLI dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Trained on 840 billion tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For clarity, the premise is indicated by P and the hypothesis by H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research reported in this paper was supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "What kind of natural language inference are nlp systems learning: Is this enough?",
"authors": [
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": ""
},
{
"first": "Stergios",
"middle": [],
"last": "Chatzikyriakidis",
"suffix": ""
}
],
"year": 2019,
"venue": "13th International Conference on Agents and Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Philippe Bernardy and Stergios Chatzikyriakidis. 2019. What kind of natural language inference are nlp systems learning: Is this enough? In 13th Inter- national Conference on Agents and Artificial Intelli- gence.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving the precision of natural textual entailment problem datasets",
"authors": [
{
"first": "Jean-Philippe",
"middle": [],
"last": "Bernardy",
"suffix": ""
},
{
"first": "Stergios",
"middle": [],
"last": "Chatzikyriakidis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceeding of LREC 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Philippe Bernardy and Stergios Chatzikyriakidis. 2020. Improving the precision of natural textual en- tailment problem datasets. In Proceeding of LREC 2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": ["R"],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": ["D"],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.05326"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An overview of natural language inference data collection: The way forward?",
"authors": [
{
"first": "Stergios",
"middle": [],
"last": "Chatzikyriakidis",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Computing Natural Language Inference Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stergios Chatzikyriakidis, Robin Cooper, Simon Dob- nik, and Staffan Larsson. 2017. An overview of natural language inference data collection: The way forward? In Proceedings of the Computing Natural Language Inference Workshop.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Loic",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.02364"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Breaking nli systems with sentences that require simple lexical inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.02266"
]
},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. arXiv preprint arXiv:1805.02266.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hypothesis only baselines in natural language inference",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.01042"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- ence. arXiv preprint arXiv:1805.01042.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Testing the generalization power of neural network models across nli benchmarks",
"authors": [
{
"first": "Aarne",
"middle": [],
"last": "Talman",
"suffix": ""
},
{
"first": "Stergios",
"middle": [],
"last": "Chatzikyriakidis",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.09774"
]
},
"num": null,
"urls": [],
"raw_text": "Aarne Talman and Stergios Chatzikyriakidis. 2018. Testing the generalization power of neural network models across nli benchmarks. arXiv preprint arXiv:1810.09774.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentence embeddings in nli with iterative refinement encoders",
"authors": [
{
"first": "Aarne",
"middle": [],
"last": "Talman",
"suffix": ""
},
{
"first": "Anssi",
"middle": [],
"last": "Yli-Jyr\u00e4",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2019,
"venue": "Natural Language Engineering",
"volume": "25",
"issue": "4",
"pages": "467--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aarne Talman, Anssi Yli-Jyr\u00e4, and J\u00f6rg Tiedemann. 2019. Sentence embeddings in nli with iterative refinement encoders. Natural Language Engineering, 25(4):467-482.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "where p is the premise representation and h the hypothesis representation. A three-layer perceptron with leaky ReLU activation between the layers then assigns a class to the example. HBMP The second model is described by Talman et al. (2019). The model is a three-layer bidirectional LSTM, where a representation is extracted between the layers through max pooling. The final representation for each sentence is the concatenation of all intermediate representations [h 0 ; h 1 ; h 2 ]. The same representation as with the BiLSTM, [p; h; p \u2212 h; p * h], where p and h are respectively the concatenations of all intermediate representations, is then passed to a three-layer perceptron with leaky ReLU activation and dropout.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "(5) P = not yourself . H = only you . (C) HBMP p = C, HBMP \u00acp = E, BERT p = N, BERT \u00acp = N (6) P = not yourself H = only you (C)",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>: Count of punctuation symbols used in the training examples of MNLI.</td></tr><tr><td>variants and test on either the p or \u00acp variants. Additionally, we train on orig and test on orig, as a baseline result.</td></tr><tr><td>In the second set, we designed a dataset to test (H2), that is, whether NLI models are able to detect semantically relevant punctuation. This experiment is performed the same way as the first set, but we replace the MNLI test data with our own dataset. The dataset we constructed for this contains a number of problems whose correct label depends on the presence or absence of punctuation. Here are some representative examples (&amp; separates the premise from the hypothesis, label follows in parentheses):</td></tr><tr><td>(1) I thank, my mother, Anna, Smith and John &amp; I thank four people (E)</td></tr><tr><td>(2) I thank, my mother Anna, Smith and John &amp; I thank two people (C)</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "The effect of punctuation on all three models, in terms of accuracy, on the MNLI dataset. MA indicates the matched and MM the mismatched test split. original is trained on the unaugmented data, p models are trained with punctuation, and \u00acp models are trained without punctuation.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null,
"num": null
},
"TABREF8": {
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null,
"num": null
}
}
}
}