{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:14.101879Z"
},
"title": "Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation",
"authors": [
{
"first": "Atticus",
"middle": [],
"last": "Geiger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "atticusg@stanford.edu"
},
{
"first": "Kyle",
"middle": [],
"last": "Richardson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Allen Institute for AI",
"location": {}
},
"email": "kyler@allenai.org"
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "cgpotts@stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation. In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion, and our intervention experiments bolster this, showing that the causal dynamics of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) probes and (4) interventions. To facilitate this holistic evaluation, we present Monotonicity NLI (MoNLI), a new naturalistic dataset focused on lexical entailment and negation. In our behavioral evaluations, we find that models trained on general-purpose NLI datasets fail systematically on MoNLI examples containing negation, but that MoNLI fine-tuning addresses this failure. In our structural evaluations, we look for evidence that our top-performing BERT-based model has learned to implement the monotonicity algorithm behind MoNLI. Probes yield evidence consistent with this conclusion, and our intervention experiments bolster this, showing that the causal dynamics of the model mirror the causal dynamics of this algorithm on subsets of MoNLI. This suggests that the BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural Language Inference (NLI) keys into fundamental aspects of how people reason with language. Although NLI is generally cast in informal terms that embrace the indeterminacy of such reasoning, the task nonetheless manifests a number of very predictable reasoning patterns. For example, systematic manipulations of the lexical meanings (Glockner et al., 2018) , syntactic constructions (Nie et al., 2019a) , and contextual assumptions (Pavlick and Callison-Burch, 2016) have systematic effects on the correct labels. These patterns present crisp, motivated learning targets that we can leverage to not only evaluate the ability of NLI models to learn robust solutions, but also to analyze the internal dynamics of successful models.",
"cite_spans": [
{
"start": 340,
"end": 363,
"text": "(Glockner et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 390,
"end": 409,
"text": "(Nie et al., 2019a)",
"ref_id": "BIBREF32"
},
{
"start": 439,
"end": 473,
"text": "(Pavlick and Callison-Burch, 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, our learning target concerns the role of monotonicity in NLI (MacCartney, 2009; Icard and Moss, 2013) . Specifically, we would like to determine whether models can learn to represent lexical relations and accurately model that negation reverses entailment relations (e.g., dance entails move, but not move entails not dance). This property of negation is downward monotonicity.",
"cite_spans": [
{
"start": 76,
"end": 94,
"text": "(MacCartney, 2009;",
"ref_id": "BIBREF26"
},
{
"start": 95,
"end": 116,
"text": "Icard and Moss, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In service of pursuing this question, we present Monotonicity NLI (MoNLI), a new naturalistic NLI dataset for training and assessing systems on these semantic notions (Section 3). MoNLI extends SNLI (Bowman et al., 2015) to provide comprehensive coverage of examples that depend on lexical reasoning with and without negation. Using MoNLI, we conduct both behavioral and structural evaluations, seeking to provide a detailed picture of the solutions that top-performing models learn. We evaluate Enhanced Sequential Inference Models (Chen et al., 2016) and BERT-based models (Devlin et al., 2019) , along with standard baselines.",
"cite_spans": [
{
"start": 194,
"end": 220,
"text": "SNLI (Bowman et al., 2015)",
"ref_id": null
},
{
"start": 533,
"end": 552,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 575,
"end": 596,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work evaluating the ability of neural models to learn monotonicity has focused on challenge test sets and systematic generalization tasks (Yanaka et al., 2019b,a; Geiger et al., 2019; Richardson et al., 2019) . These behavioral evaluations ask whether models achieve a desired inputoutput behavior. We employ these methods as well, but we also ask whether models achieve an algorithmic-level learning target, in the terms of Marr (1982) . Monotonicity reasoning can be cast as an algorithm that solves MoNLI perfectly. Do neural models implement this algorithm?",
"cite_spans": [
{
"start": 147,
"end": 171,
"text": "(Yanaka et al., 2019b,a;",
"ref_id": null
},
{
"start": 172,
"end": 192,
"text": "Geiger et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 193,
"end": 217,
"text": "Richardson et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 434,
"end": 445,
"text": "Marr (1982)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first report on two behavioral evaluations (Section 5). When MoNLI is used as a challenge test set, we find that models trained on SNLI and/or MNLI (Williams et al., 2018) fail to reason with lex-ical entailments when negation is involved. However, we trace these failures to gaps in the training data. In response, we pose a systematic generalization task in which we expose models to MoNLI examples through fine-tuning while still requiring them to generalize to entirely new pairs of lexical items in negated linguistic contexts at test time. All our models solve the task, which suggests that they have learned general theories of lexical entailment and negation.",
"cite_spans": [
{
"start": 151,
"end": 174,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then report on structural evaluations (Section 6), seeking to determine whether our topperforming BERT-based models implement the target monotonicity algorithm. In probing experiments, we find evidence consistent with this result, but it's not conclusive, since probes alone cannot reveal a model's causal dynamics. However, our intervention experiments provide evidence that BERT does mirror the causal dynamics of the monotonicity algorithm, at least on large subsets of MoNLI. We conclude that this model at least partially embeds a theory of lexical entailment and negation at an algorithmic level, in addition to fully achieving the correct input-output behavior on MoNLI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Monotonicity Our empirical focus is entailment and negation. This is one (highly prevalent) aspect of monotonicity reasoning, which governs many aspects of lexical and constructional meaning in natural language (S\u00e1nchez-Valencia, 1991; van Benthem, 2008) . There is an extensive literature on monotonicity logics (Moss, 2009; Icard, 2012; Icard and Moss, 2013; . Within NLP, MacCartney and Manning (2008, 2009) apply very rich monotonicity algebras to NLI problems, Hu et al. (2019a,b) create NLI models that use polarity-marked parse trees, and Yanaka et al. (2019a,b) and Geiger et al. (2019) investigate the ability of neural models to understand natural logic reasoning. While we consider only a small fragment of these approaches, the methods we develop should apply to more complex systems as well.",
"cite_spans": [
{
"start": 211,
"end": 235,
"text": "(S\u00e1nchez-Valencia, 1991;",
"ref_id": "BIBREF38"
},
{
"start": 236,
"end": 254,
"text": "van Benthem, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 313,
"end": 325,
"text": "(Moss, 2009;",
"ref_id": "BIBREF30"
},
{
"start": 326,
"end": 338,
"text": "Icard, 2012;",
"ref_id": "BIBREF20"
},
{
"start": 339,
"end": 360,
"text": "Icard and Moss, 2013;",
"ref_id": "BIBREF22"
},
{
"start": 363,
"end": 389,
"text": "Within NLP, MacCartney and",
"ref_id": null
},
{
"start": 390,
"end": 410,
"text": "Manning (2008, 2009)",
"ref_id": null
},
{
"start": 466,
"end": 485,
"text": "Hu et al. (2019a,b)",
"ref_id": null
},
{
"start": 546,
"end": 569,
"text": "Yanaka et al. (2019a,b)",
"ref_id": null
},
{
"start": 574,
"end": 594,
"text": "Geiger et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Challenge Test Sets Challenge 1 test sets are supplementary evaluation resources that test the ability of a model to generalize to examples outside the dis-tribution of the data it was trained, developed, and (standardly) tested on. These tests probe the generalization capabilities of state-of-the-art models with respect to the tasks they have been trained on, by focusing on difficult or underrepresented examples in a model's training set (Jia and Liang, 2017; Naik et al., 2018; Glockner et al., 2018; Richardson et al., 2019; Talmor et al., 2019) . Fodor and Pylyshyn (1988) offer systematicity as a hallmark of human cognition. Systematicity says that certain behaviors are intrinsically connected to others by compositional structures. For example, understanding the puppy loves Sandy is intrinsically connected to understanding Sandy loves the puppy. For Fodor and Pylyshyn, these observations trace to the mind's ability to recombine known parts and rules. There are often strong intuitions that certain generalization tasks are only solved by models with systematic structures. These tasks are referred to as systematic generalization tasks (Lake and Baroni, 2018; Hupkes et al., 2019; Yanaka et al., 2020; Bahdanau et al., 2018; Geiger et al., 2019; Goodwin et al., 2020) .",
"cite_spans": [
{
"start": 443,
"end": 464,
"text": "(Jia and Liang, 2017;",
"ref_id": "BIBREF23"
},
{
"start": 465,
"end": 483,
"text": "Naik et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 484,
"end": 506,
"text": "Glockner et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 507,
"end": 531,
"text": "Richardson et al., 2019;",
"ref_id": "BIBREF37"
},
{
"start": 532,
"end": 552,
"text": "Talmor et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 555,
"end": 580,
"text": "Fodor and Pylyshyn (1988)",
"ref_id": "BIBREF8"
},
{
"start": 1162,
"end": 1175,
"text": "Baroni, 2018;",
"ref_id": "BIBREF24"
},
{
"start": 1176,
"end": 1196,
"text": "Hupkes et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 1197,
"end": 1217,
"text": "Yanaka et al., 2020;",
"ref_id": "BIBREF44"
},
{
"start": 1218,
"end": 1240,
"text": "Bahdanau et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 1241,
"end": 1261,
"text": "Geiger et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 1262,
"end": 1283,
"text": "Goodwin et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Probing Probes are supervised learning models trained to extract information from representations created by another model. They are a primary tool in the analysis of neural network models (Peters et al. 2018; Tenney et al. 2019; Clark et al. 2019 ; for a full review, see Belinkov and Glass 2019) . In aggregate, this work has provided nuanced insights into the internal representations of these models, as well as their capacity to directly support learning diverse NLP tasks via fine-tuning (Hewitt and Liang, 2019) . However, probes are only able to reveal how representations correlate with information. They cannot determine if that information plays a causal role in model predictions (Belinkov and Glass, 2019; Vig et al., 2020) .",
"cite_spans": [
{
"start": 189,
"end": 209,
"text": "(Peters et al. 2018;",
"ref_id": "BIBREF36"
},
{
"start": 210,
"end": 229,
"text": "Tenney et al. 2019;",
"ref_id": "BIBREF40"
},
{
"start": 230,
"end": 247,
"text": "Clark et al. 2019",
"ref_id": "BIBREF5"
},
{
"start": 273,
"end": 297,
"text": "Belinkov and Glass 2019)",
"ref_id": "BIBREF1"
},
{
"start": 494,
"end": 518,
"text": "(Hewitt and Liang, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 692,
"end": 718,
"text": "(Belinkov and Glass, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 719,
"end": 736,
"text": "Vig et al., 2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systematic Generalization Tasks",
"sec_num": null
},
{
"text": "Interventions Intervention studies go beyond probing to make changes to the internal states of a network, with the goal of observing how those changes affect system outputs. Giulianelli et al. (2018) use probing results to make informed interventions during LSTM language model predictions to preserve information about the grammatical subject's number, and this led to improved performance in subject-verb agreement. Vig et al. (2020) use interventions to characterize how gender bias is represented in the internal causal structure of a model, and find that a small number of synergistic neurons mediate gender bias. They also find that the effect of these neurons is roughly linearly separable from the effect of the remainder of the model, a remarkable finding considering the highly non-linear nature of neural networks.",
"cite_spans": [
{
"start": 174,
"end": 199,
"text": "Giulianelli et al. (2018)",
"ref_id": "BIBREF10"
},
{
"start": 418,
"end": 435,
"text": "Vig et al. (2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systematic Generalization Tasks",
"sec_num": null
},
{
"text": "We created the MoNLI corpus to investigate the ability of NLI models to learn the compositional interactions between lexical entailment and negation. MoNLI contains 2,678 NLI examples in the usual format for NLI datasets like SNLI. In each example, the hypothesis is the result of substituting a single word w p in the premise for a hypernym or hyponym w h . We refer to w h and w p as the substituted words in an example. In 1,202 of these examples, the substitution is performed under the scope of the downward monotone operator not. Downward monotone operators reverse entailment relations: dance entails move, but not move entails not dance. We refer to these examples collectively as NMoNLI. In the remaining 1,476 examples, this substitution is performed under the scope of no downward monotone operator. We refer to these examples collectively as PMoNLI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "MoNLI was generated according to the following procedure. First, randomly select a premise or hypothesis sentence s from the SNLI training dataset. Second, select a noun in s, and, using WordNet (Fellbaum, 1998) , select all hypernyms and hyponyms of the noun subject to two conditions: (1) the hypernym or hyponym appears in the SNLI training data, and (2) substituting the hypernym or hyponym results in a grammatical, coherent sentence s . Finally, for each substitution, generate two examples for the corpus -one where the original sentence is the premise and the edited sentence is the hypothesis, and one example with those roles reversed. Each of these example pairs has one example with the label entailment and one example with the label neutral, resulting in a dataset perfectly balanced between the two labels.",
"cite_spans": [
{
"start": 195,
"end": 211,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "For example, suppose we select the SNLI sentence (A) and we identify the noun plants for substitution. Then we enter plants into WordNet and find that flowers is a hyponym of plants, so we substitute flowers for plants to create the edited sentence (B):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "(A) The three children are not holding plants. \u21d3 (B) The three children are not holding flowers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "This leads to two new MoNLI examples:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "(A) entailment (B) (B) neutral (A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "These two examples would belong to NMoNLI, due to not scoping over the substitution site. If not were removed from both of these sentences, then their labels would be swapped and both examples would belong to PMoNLI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
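The labeling scheme described above (each substitution yields one entailment and one neutral example, with not flipping which direction is which) can be sketched in Python. This is an illustrative reconstruction, not the authors' generation code; the function name `make_pair` and its field names are hypothetical.

```python
def make_pair(original: str, edited: str, relation: str, negated: bool):
    """Given a sentence and its word-substituted edit, emit the two MoNLI
    examples. `relation` is 'hyponym' if the edit replaces a word with a
    hyponym (a more specific term), 'hypernym' otherwise.

    Without negation, the more specific sentence entails the more general
    one; the downward monotone operator `not` reverses this.
    """
    if relation == "hyponym":
        # original is more general: original->edited is neutral,
        # edited->original is entailment
        labels = ("neutral", "entailment")
    else:  # hypernym substitution: original is more specific
        labels = ("entailment", "neutral")
    if negated:
        # negation reverses the entailment relation
        labels = tuple(reversed(labels))
    return [
        {"premise": original, "hypothesis": edited, "label": labels[0]},
        {"premise": edited, "hypothesis": original, "label": labels[1]},
    ]
```

On the paper's example, substituting flowers (a hyponym) for plants under not makes the (A)-to-(B) direction an entailment and the reverse direction neutral.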
{
"text": "MoNLI was generated by the authors by hand; examples judged to be unnatural were removed, and any grammatical or spelling errors in the original SNLI sentence were corrected. This data generation process is similar to that of Glockner et al. (2018) , except they focus on the lexical relations of exclusion and synonymy, while we focus on entailment relations. This difference prevents their dataset from capturing monotonicity reasoning, which involves entailment relations, but not exclusion or synonymy.",
"cite_spans": [
{
"start": 226,
"end": 248,
"text": "Glockner et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monotonicity NLI dataset",
"sec_num": "3"
},
{
"text": "We evaluated four models on MoNLI:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "CBOW The continuous bag of words baseline from Williams et al. (2018) .",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "Williams et al. (2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "BiLSTM The bidirectional LSTM baseline from Williams et al. (2018) .",
"cite_spans": [
{
"start": 44,
"end": 66,
"text": "Williams et al. (2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "ESIM The Enhanced Sequential Inference Model (Chen et al., 2016 ) is a hybrid TreeLSTMbased and biLSTM-based model that uses an inter-sentence attention mechanism to align words across sentences.",
"cite_spans": [
{
"start": 45,
"end": 63,
"text": "(Chen et al., 2016",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "BERT A Transformer model trained to do masked language modeling and next-sentence prediction (Devlin et al., 2019) . We rely on uncased BERT-base parameters from Hugging Face transformers (Wolf et al., 2019) .",
"cite_spans": [
{
"start": 93,
"end": 114,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 188,
"end": 207,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "The first two models serve as baselines, while the other two models achieve comparable, near state-of-the-art scores on SNLI. challenge test dataset that evaluates an NLI model's ability to perform simple inferences founded in lexical entailments and monotonicity. As discussed in Section 3, it is not especially adversarial, in that we sampled sentences from the SNLI training set and only substituted in hypernyms and hyponyms that occur in the SNLI training set. This keeps MoNLI as close as possible to the distribution of SNLI. Thus, if a model fails on MoNLI, we can be confident that this failure stems from a lack of knowledge about monotonicity and lexical entailment relations, rather than some other confounding factor like syntactic structures or vocabulary items that were unseen in training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "The results are in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1.1"
},
{
"text": "While these models trained on SNLI do not know that not is downward monotone in these examples, this is not conclusive evidence that they are unable to learn this semantic property. This ability might not be necessary for success on SNLI, where only 38 examples have negation in both the premise and hypothesis. A natural next step is to train on MNLI, where the coverage with regard to negation is better: about 18K examples (\u22484%) have negation in the premise and hypothesis. We tried this, by combining MNLI with SNLI, and the results were almost exactly the same. However, even the MNLI examples might not manifest the kind of monotonicity reasoning that we are targeting. Our next experiments help to resolve this issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.1.2"
},
{
"text": "Our three models trained on SNLI have knowledge of the lexical relations between substituted words, but do not know that the presence of not reverses the relationship between the word-level relation and the sentence-level relation. We now conduct a behavioral evaluation to determine whether models are able to learn a general theory of lexical entailment and negation when exposed to a limited subset of NMoNLI during training. In designing systematic generalization tasks, we seek to constrain the training data in ways that prevent unsystematic models from succeeding. Defining disjoint train/test splits is enough to foil truly unsystematic models (e.g., simple look-up tables). However, building on much previous work (Lake and Baroni, 2018; Hupkes et al., 2019; Yanaka et al., 2020; Bahdanau et al., 2018; Goodwin et al., 2020; Geiger et al., 2019) , we contend that a randomly constructed disjoint train/test split only diag-noses the most basic level of systematicity. More difficult systematic generalization tasks will only be solved by models exhibiting more complex compositional structures. Specifically, we want our systematic generalization task to be solved only by models that compute lexical entailment relations that may be reversed by negation. A learning model that memorizes labels based on substituted word pairs and whether negation is present would succeed on a disjoint train and test set as long as all pairs of substituted words appear during training, and this model does not compute the lexical relation between word pairs.",
"cite_spans": [
{
"start": 733,
"end": 746,
"text": "Baroni, 2018;",
"ref_id": "BIBREF24"
},
{
"start": 747,
"end": 767,
"text": "Hupkes et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 768,
"end": 788,
"text": "Yanaka et al., 2020;",
"ref_id": "BIBREF44"
},
{
"start": 789,
"end": 811,
"text": "Bahdanau et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 812,
"end": 833,
"text": "Goodwin et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 834,
"end": 854,
"text": "Geiger et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Systematic Generalization Task",
"sec_num": "5.2"
},
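One way to realize such a split is to assign examples greedily while keeping every substituted word on a single side. The sketch below is illustrative only (the paper's actual split is given in its Appendix A.1), and the field names `word_premise`/`word_hypothesis` are hypothetical.

```python
def split_by_substituted_words(examples, test_fraction=0.3):
    """Partition examples so the substituted words appearing in train and
    test are disjoint. Examples whose words would straddle both sides are
    dropped to preserve disjointness."""
    train, test = [], []
    train_words, test_words = set(), set()
    target = int(test_fraction * len(examples))
    for ex in examples:
        words = {ex["word_premise"], ex["word_hypothesis"]}
        if words & train_words and words & test_words:
            continue  # assigning either way would leak vocabulary
        if words & train_words:
            train.append(ex); train_words |= words
        elif words & test_words or len(test) < target:
            test.append(ex); test_words |= words
        else:
            train.append(ex); train_words |= words
    # sanity check: no substituted word crosses the split
    assert not (train_words & test_words)
    return train, test
```

A model that merely memorizes substituted word pairs cannot transfer across such a split, since every test-time pair is built from unseen words.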
{
"text": "As such, we propose a generalization task where NMoNLI is partitioned into train and test sets such that the substituted words in the train set and the substituted words in the test sets are disjoint. 2 The specific train/test split we used is described in Appendix A.1. Ideally, a model trained on SNLI that is further trained on NMoNLI will still maintain strong performance on SNLI. We use inoculation by fine-tuning (Liu et al., 2019) to evaluate models on this ability. We report on the inoculated model with the highest average performance on SNLI test and NMoNLI test (full details of the inoculation process are in Appendix A.2).",
"cite_spans": [
{
"start": 201,
"end": 202,
"text": "2",
"ref_id": null
},
{
"start": 420,
"end": 438,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Systematic Generalization Task",
"sec_num": "5.2"
},
{
"text": "The models are evaluated on examples where they know the relation between the substituted words, as evidenced by high performance on PMoNLI, but have not seen those substituted words in the presence of negation during training. However, they have seen other substituted words with the same relation in the presence of negation during training, making this task hard, but fair (Geiger et al., 2019) . To solve this harder generalization task, we believe a model must learn to reverse the lexical relation in general; the identity of the substituted words must be abstracted away.",
"cite_spans": [
{
"start": 376,
"end": 397,
"text": "(Geiger et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Systematic Generalization Task",
"sec_num": "5.2"
},
{
"text": "We present our results in Table 1 , under the heading 'With NMoNLI fine-tuning'. All of our models solve this generalization task. However, only BERT does so while maintaining high performance on SNLI. We also report ablation studies on our two non-baseline models, evaluating their performance on our systematic generalization task without training on SNLI and without any pretraining at all. We find that both models still succeed with no pre- 2 We use only NMoNLI in our systematic generalization task because models trained on SNLI already achieve high performance on PMoNLI.",
"cite_spans": [
{
"start": 446,
"end": 447,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2.1"
},
{
"text": "1 lexrel \u2190 GET-LEX-REL(MoNLIexample) 2 if CONTAINS-NOT (MoNLIexample) 3 return REVERSE(lexrel ) 4 return lexrel Figure 1 : An algorithm able to solve the MoNLI dataset that provides a theoretically motivated learning target for neural models at an algorithmic level of analysis (Marr, 1982) . INFER takes in an example from MoNLI and outputs the relation between the premise and hypothesis. It uses three predefined functions. GET-LEX-REL returns the relation (one of { , }) between the substituted words in the premise and hypothesis. CONTAINS-NOT returns true iff negation is present. RE-VERSE maps to and vice-versa. training on SNLI, but fail with no pretraining whatsoever. This suggests that BERT pretraining and GloVe vectors both provide sufficient information about lexical relations for the models to succeed. BERT's ability to get slightly above chance performance with no pretraining indicates the presence of some statistical artifacts in our dataset (Gururangan et al., 2018) .",
"cite_spans": [
{
"start": 279,
"end": 291,
"text": "(Marr, 1982)",
"ref_id": "BIBREF29"
},
{
"start": 965,
"end": 990,
"text": "(Gururangan et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 55,
"end": 72,
"text": "(MoNLIexample) 3",
"ref_id": null
},
{
"start": 113,
"end": 121,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "INFER(MoNLIexample)",
"sec_num": null
},
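The INFER procedure in Figure 1 is compact enough to transcribe directly. Below is a sketch in Python: the two entailment relations are written as the strings "forward" and "reverse" since the original symbols are purely notational, and GET-LEX-REL / CONTAINS-NOT are passed in as oracles rather than implemented.

```python
def reverse(lexrel: str) -> str:
    """REVERSE: maps forward entailment to reverse entailment and back."""
    return {"forward": "reverse", "reverse": "forward"}[lexrel]

def infer(example, get_lex_rel, contains_not) -> str:
    """INFER from Figure 1: the sentence-level relation is the lexical
    relation between the substituted words, reversed when the substitution
    occurs under the downward monotone operator `not`."""
    lexrel = get_lex_rel(example)    # line 1
    if contains_not(example):        # line 2
        return reverse(lexrel)       # line 3
    return lexrel                    # line 4
```

The structural evaluations that follow ask whether BERT stores a counterpart of the intermediate variable lexrel and uses it causally, mirroring this control flow.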
{
"text": "In sum, our models were able to solve our systematic generalization task, which we believe to be evidence that they learn to compute the lexical relations between substituted words. However, we also believe this evidence is weak, as there is no formal relationship between a model solving a generalization task and that model having any particular systematic internal structures. This evaluation is fundamentally behavioral, only concerning model inputs and outputs. We believe that a structural evaluation is necessary to conclusively evaluate systematicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFER(MoNLIexample)",
"sec_num": null
},
{
"text": "In our behavioral evaluations, the learning target was to mimic the input-output behavior defined by MoNLI. Assessing this learning target is straightforward. We now report on structural evaluations to try to determine whether a neural model has particular internal dynamics. For this, we rely on very recent probing and intervention methodologies that are not yet well understood and must be tailored to the model being analyzed. As such, we choose to focus on a single model, namely, the BERT model from Section 5 fine-tuned on NMoNLI. We chose BERT because it achieved exceptional results on (Figure 1 ). Selectivity is probe accuracy minus control probe accuracy (Hewitt and Liang, 2019) . The grey dotted line provides a soft ceiling for selectivity values, because we expect control probes trained on a binary task to at least achieve chance accuracy.",
"cite_spans": [
{
"start": 667,
"end": 691,
"text": "(Hewitt and Liang, 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 595,
"end": 604,
"text": "(Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Structural Evaluations",
"sec_num": "6"
},
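The selectivity metric used in this probing analysis can be sketched as follows: the same probe architecture is trained once on the real labels and once on a control task where each distinct input type receives a fixed random label, and selectivity is the accuracy gap (Hewitt and Liang, 2019). The helper names below are illustrative, not from the paper's code.

```python
import random

def make_control_labels(input_types, label_set, seed=0):
    """Control task: assign each distinct input type a fixed random label,
    so the control probe can only succeed by memorization."""
    rng = random.Random(seed)
    assignment = {}
    return [assignment.setdefault(t, rng.choice(label_set))
            for t in input_types]

def selectivity(probe_accuracy, control_accuracy):
    """Selectivity = probe accuracy minus control-probe accuracy."""
    return probe_accuracy - control_accuracy
```

A highly selective probe scores well on the real task but near chance on the control task, which is why chance accuracy acts as a soft ceiling on selectivity for a binary task.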
{
"text": "NMoNLI after fine-tuning without experiencing a significant drop on SNLI. Intuitively, if our BERT model implements this algorithm, there will be some representation in BERT that stores lexrel and BERT will use that representation for a final prediction. Probes can give us an idea of where information is stored, and interventions help us see how that information is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Evaluations",
"sec_num": "6"
},
{
"text": "Before we can go looking for where BERT stores and uses lexrel , we must limit ourselves to a tractable number of model internal representations. When our BERT model processes an example from MoNLI, it is tokenized as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Evaluations",
"sec_num": "6"
},
{
"text": "e = [CLS], p, [SEP], h, [SEP]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Evaluations",
"sec_num": "6"
},
{
"text": "and 12 rows of vector representations are created, so each token is associated with 12 vectors. We localize our efforts to the representations created for [CLS] and the tokens for the substituted words in the premise and hypothesis, w p and w h (as described in Section 3). This narrows our search to 36 possible vector locations where BERT could be storing the variable lexrel for use in final output prediction. We denote these 36 locations with BERT r wp , BERT r w h , and BERT r [CLS] where r is a row (1 r 12).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Evaluations",
"sec_num": "6"
},
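The bookkeeping behind these 36 locations can be sketched in Python. This is a minimal illustration with synthetic hidden states standing in for a real BERT run; the array shapes, layer count, and token indices are our own assumptions, not values from the paper:

```python
import numpy as np

# Synthetic stand-in for BERT's 12 rows of representations:
# one [seq_len, hidden_dim] array per row (a real run would use a
# BERT implementation that returns all hidden states).
seq_len, hidden_dim, n_rows = 10, 768, 12
rng = np.random.default_rng(0)
hidden_states = [rng.normal(size=(seq_len, hidden_dim)) for _ in range(n_rows)]

# Token positions of [CLS] and the substituted words w_p and w_h
# (hypothetical indices chosen for this illustration).
positions = {"CLS": 0, "w_p": 3, "w_h": 7}

# One candidate location per (row r, token) pair: 12 x 3 = 36 vectors.
locations = {
    (r, name): hidden_states[r - 1][idx]
    for r in range(1, n_rows + 1)
    for name, idx in positions.items()
}

assert len(locations) == 36
print(locations[(3, "w_h")].shape)  # each location holds one hidden vector
```

Each key such as `(3, "w_h")` corresponds to a location like BERT 3 w h in the notation above.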
{
"text": "We follow in using probing evidence to determine whether a neural model stores the same information as a symbolic algorithm. They used probes to predict variable values used in an algorithm from the hidden states of sequential recurrent networks trained to perform basic arithmetic. We do something similar, probing the 36 vector locations defined by BERT r wp , BERT r w h , and BERT r [CLS] for the value of the variable lexrel and the output of INFER. Hewitt and Liang (2019) argue that accuracy is a poor metric for probes and that the ideal probe will highly selective, that is, it will have high accuracy on a linguistic task but low accuracy on a control task where inputs are given random labels. In this setting, our linguistic tasks are predicting the value of lexrel and the output of INFER from a modelinternal vector created by BERT for some MoNLI example. Our control task is identical, except labels are randomly assigned to inputs. Hewitt and Liang demonstrate that small, linear probes result in high selectivity. Following this guidance, we used a linear classifier with 4 hidden units that was trained and evaluated on all of MoNLI.",
"cite_spans": [
{
"start": 455,
"end": 478,
"text": "Hewitt and Liang (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probes",
"sec_num": "6.1"
},
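The selectivity computation described above can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's setup: a linear probe is trained once on true labels and once on randomly reassigned control labels, and selectivity is the difference in accuracy:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_linear_probe(X, y, steps=500, lr=0.5):
    """Logistic-regression probe trained with plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return lambda Z: (1 / (1 + np.exp(-(Z @ w + b))) > 0.5).astype(int)

# Synthetic "hidden vectors": the binary label is linearly decodable
# from the first coordinate.
n, d = 400, 16
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d)) + 3.0 * y[:, None] * np.eye(d)[0]

probe = train_linear_probe(X, y)
acc = (probe(X) == y).mean()

# Control task: same inputs, labels randomly reassigned.
y_control = rng.permutation(y)
control = train_linear_probe(X, y_control)
control_acc = (control(X) == y_control).mean()

selectivity = acc - control_acc
print(f"accuracy={acc:.2f} control={control_acc:.2f} selectivity={selectivity:.2f}")
```

A small linear probe fits the real labels well but cannot memorize the random control labels, so selectivity is high, which is the behavior Hewitt and Liang (2019) recommend selecting for.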
{
"text": "Our probing results are summarized in Figure 2 . Probes were able to achieve high accuracy and high selectivity predicting the output of INFER at every location other than the locations BERT k [CLS] where 1 \u2264 k \u2264 4, and high accuracy and high selectivity predicting the value of lexrel at every location other than BERT 1 [CLS] and BERT 2 [CLS] . This qualitative picture is compatible with a story where BERT stores the value of lexrel at any location other than BERT 1 [CLS] or BERT 2 [CLS] and then uses this information to compute a final output prediction at any location other than the locations BERT k [CLS] where 1 \u2264 k \u2264 4. The fact that probes trained on the vectors at locations BERT 3 [CLS] or BERT 4 [CLS] have high accuracy and selectivity predicting the value of lexrel , but moderate accuracy and low selectivity predicting the output of INFER, may suggest a more specific story in which these two locations store the value of the variable lexrel before this information is used to compute the final output.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Probes",
"sec_num": "6.1"
},
{
"text": "We emphasize that, while the probing results are compatible with these stories, they only provide conclusive evidence about how representations correlate with the value of lexrel and the output of INFER. They cannot determine whether this information plays a causal role in model predictions (Belinkov and Glass, 2019; Vig et al., 2020) .",
"cite_spans": [
{
"start": 292,
"end": 318,
"text": "(Belinkov and Glass, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 319,
"end": 336,
"text": "Vig et al., 2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probes",
"sec_num": "6.1"
},
{
"text": "Probes give us a picture of where information is stored by our BERT model, but they cannot determine whether that information is used to make final predictions. Interventions can help us address this deeper question. As discussed above, our algorithmic-level learning target is for BERT to mimic the dynamics of the algorithm INFER in Figure 1 . Icard (2017) provided the insight that algorithms like INFER can be explicitly understood as causal models (Pearl, 2001) . This means that the causal role of lexrel , the lone variable in INFER, can be characterized with counterfactual claims about how altering the value of the variable would cause output behavior to change.",
"cite_spans": [
{
"start": 346,
"end": 358,
"text": "Icard (2017)",
"ref_id": "BIBREF21"
},
{
"start": 453,
"end": 466,
"text": "(Pearl, 2001)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "Suppose INFER is run on a MoNLI example i. Let lexrel (i) \u2208 { , } be the value that lexrel takes on, and let INFER(i) \u2208 { , } be the output. Then INFER can be see as providing the following counterfactual characterization of lexrel : if the value of lexrel were changed from lexrel (i) to lexrel (j), where j is a second MoNLI example, then INFER(i) would change to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "INFER lexrel(i)\u2192lexrel(j) (i) = INFER(i) lexrel (i) = lexrel (j) REVERSE(INFER(i)) lexrel (i) = lexrel (j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
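This counterfactual can be written out directly. The sketch below uses string encodings of our own for the two lexical relations and the two output classes; the output is reversed exactly when the interchanged value differs from the original:

```python
def reverse(label):
    # REVERSE flips between the two possible outputs of INFER.
    return {"entailment": "neutral", "neutral": "entailment"}[label]

def infer_interchanged(infer_i, lexrel_i, lexrel_j):
    """Output of INFER on example i after setting lexrel to its value on j.

    Mirrors the piecewise definition: unchanged when lexrel(i) = lexrel(j),
    REVERSE(INFER(i)) when lexrel(i) differs from lexrel(j).
    """
    return infer_i if lexrel_i == lexrel_j else reverse(infer_i)

print(infer_interchanged("entailment", "forward", "forward"))  # entailment
print(infer_interchanged("entailment", "forward", "reverse"))  # neutral
```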
{
"text": "In other words, if lexrel were to take on the opposite value, then the output would also take on the opposite value. Our analytic tool for evaluating whether such causal dynamics are present in BERT is the interchange intervention. Figure 3 provides Figure 3 : An illustrative interchange intervention:",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 249,
"text": "Figure 3 provides",
"ref_id": null
},
{
"start": 250,
"end": 258,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "The solid arrows represent a hypothesis about where the model stores and uses information about lexical entailment. The dotted arrow is an interchange intervention, where the green vector (top) we think stores reverse entailment, trees elms, is interchanged with the red vector (middle) we think stores forward entailment, pugs dogs, leading to a modified network (bottom). If our hypothesis is correct, then the output should change from entailment to neutral, because the negation in the green example reverses the relationship between lexical entailment and sentence-level entailment. If this label reversal is not observed, crucial entailment information must lie elsewhere in the network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "picture of how these experiments work, and the following definition seeks to make this more precise and general:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "Interchange Intervention Let L be one of the 36 locations defined by BERT r wp , BERT r w h , and BERT r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "[CLS] . When BERT is making a prediction for i, suppose that the vector created at location L on input i is replaced with the vector created at location L on input j and this results in the output y. We say that y is the result of an interchange intervention from i to j at location L and denote this output as BERT L(i)\u2192L(j) (i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
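An interchange intervention can be illustrated on a toy feed-forward network (a stand-in of our own devising, not the BERT model): the vector the model creates at a chosen location on input j is swapped into the computation on input i, and the resulting output is observed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-layer network standing in for BERT; the weights are arbitrary.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def layer1(x):
    return np.tanh(x @ W1)

def forward(x, swap_in=None):
    """Run the model; if swap_in is given, it replaces the layer-1 vector
    (the analogue of replacing the vector created at location L)."""
    h = layer1(x) if swap_in is None else swap_in
    return int((h @ W2).argmax())

x_i = rng.normal(size=4)
x_j = rng.normal(size=4)

# Interchange intervention from i to j at "location" layer 1: compute the
# vector the model creates on input j, then run on input i with it swapped in.
y_interchanged = forward(x_i, swap_in=layer1(x_j))

# Sanity check: swapping in i's own vector reproduces the normal output.
assert forward(x_i, swap_in=layer1(x_i)) == forward(x_i)
print(y_interchanged)
```

Comparing `y_interchanged` against the output the algorithm predicts for the same swap is exactly the equality tested in the experiments below.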
{
"text": "In essence, BERT L(i)\u2192L(j) (i) characterizes the output behavior that results from an experiment where model-internal vectors are interchanged at location L. Recall that INFER lexrel(i)\u2192lexrel(j) (i) describes what output is provided by INFER if variables are interchanged. If for some subset of MoNLI S, we believe that BERT is both storing the value of lexrel at some location L and using that information to make a final prediction, then for all i, j \u2208 S the following should hold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "INFER lexrel(i)\u2192lexrel(j) (i) = BERT L(i)\u2192L(j) (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "This amounts to observing that the variables in the algorithm and the vectors in the model satisfy the same counterfactual claims. When a vector representing forward entailment is interchanged with a different vector representing forward entailment, model output behavior should be unchanged. If a vector representing forward entailment is interchanged with a different vector representing reverse entailment, then the model output should be reversed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "Results Due to computational constraints, we randomly conducted interchange experiments at our 36 different locations and chose the location with the most promise, namely, BERT 3 w h . (Appendix A.3 covers our selection methodology in detail.) We conducted \u22487 million interchange experiments at this location, one experiment for every pair of examples in MoNLI. Using a simple greedy algorithm, we discovered several large subsets of MoNLI where BERT mimics the causal dynamics of INFER. (The greedy algorithm is described in Appendix A.3.) These subsets have size 98, 63, 47, and 37, and for each of these subsets there are many pairs of examples with interchange experiments that had a causal impact on the final model prediction. To put these results in context, if interchange experiments had a random effect on model output, then the expected number of subsets larger than 20 with this property would be less than 10 \u22128 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
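The paper's greedy subset-discovery algorithm is given in its Appendix A.3, which is not reproduced here; the sketch below is only one plausible greedy scheme of our own for growing a subset in which every pair of interchange experiments agrees with INFER:

```python
# Hypothetical greedy reconstruction (an assumption, not the paper's
# Appendix A.3 algorithm): add an example to the subset S only if its
# interchange experiments agree, in both directions, with every example
# already in S.
def greedy_agreeing_subset(examples, agrees):
    S = []
    for i in examples:
        if all(agrees(i, j) and agrees(j, i) for j in S):
            S.append(i)
    return S

# Tiny synthetic agreement relation: examples agree iff they share parity,
# so the greedy pass keeps the even examples it encounters first.
examples = list(range(10))
agrees = lambda i, j: (i % 2) == (j % 2)
print(greedy_agreeing_subset(examples, agrees))  # [0, 2, 4, 6, 8]
```

With the real data, `agrees(i, j)` would check whether the interchange intervention from i to j at BERT 3 w h yields the output predicted by INFER.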
{
"text": "Discussion These results show that the values assigned by the algorithm INFER to the variable lexrel and the vectors created by BERT at the location BERT 3 w h exhibit the same causal dynamics on four large subsets of MoNLI. In Appendix A.3 we show a visualization of the subset with 98 examples. These pairs contain only 13 of the 69 distinct hyponyms in MoNLI, which makes it clear that this subset of MoNLI is not a random sample, but rather reflects a coherent semantic space. From this we conclude that, in addition to capturing the input-output behavior described by MoNLI, our BERT model at least partially embeds a theory of lexical entailment and negation at an algorithmic level of analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "Importantly, these results do not show that BERT fails to mimic the causal dynamics of INFER on larger subsets of MoNLI. First, we only conducted interchange experiments for every pair of examples in MoNLI at the location BERT 3 w h . Second, we did not consider the possibility that BERT stores and uses the value of lexrel at different locations, depending on which input is provided. Third, analyzing vector representations may be too coarsegrained; perhaps experiments will need to be done on individual vector units. Finally, we used a greedy algorithm to discover the four subsets of MoNLI. We did not exhaustively analyze BERT to find the largest subset of MoNLI on which it mimics the causal dynamics of INFER; such an analysis is likely computationally impossible. What we did do is perform an efficient analysis that was able to find several large subsets of MoNLI on which the desired causal dynamics are present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interventions",
"sec_num": "6.2"
},
{
"text": "To operationalize our research question of whether neural NLI models can learn the compositional interactions between lexical entailment and negation, we constructed two learning targets for neural NLI models: (1) learn the input-output behavior described by MoNLI and (2) acquire the internal dynamics of the algorithm INFER. We evaluated the first learning target with two behavioral evaluation methods, using challenge datasets to show that state-of-the-art models trained on general-purpose NLI datasets fail to exhibit the correct behavior when negation is present and then following up with a systematic generalization task that showed our models are able to learn the correct inputoutput behavior when fine-tuned on a limited, but sufficient, subset of NMoNLI. We evaluated the second learning target with two structural evaluation methods, using probes to investigate where information about the variable lexrel from INFER might be stored in a BERT model and using interventions to show that on some subsets of MoNLI our BERT model exhibits the same causal dynamics as the algorithm INFER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We believe that our holistic evaluation, leveraging both behavioral and structural methods, provides a multifaceted picture of how neural NLI models treat lexical entailment and negation. While our interchange intervention methodology is not yet formally grounded, there is great promise in the idea of investigating whether a neural model mirrors the causal dynamics of an algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Though adversarial and challenge are sometimes used synonymously, we opt for the term challenge, because our dataset was designed with the intention of evaluating whether a model learned a particular phenomenon, as opposed to breaking any particular model (cf.Nie et al. 2019b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Systematic generalization: What is required and can it be learned?",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noukhovitch",
"suffix": ""
},
{
"first": "Thien",
"middle": [],
"last": "Huu Nguyen",
"suffix": ""
},
{
"first": "Harm",
"middle": [],
"last": "De Vries",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2018,
"venue": "In Proceedings of the 6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. 2018. Systematic generaliza- tion: What is required and can it be learned? In In Proceedings of the 6th International Conference on Learning Representations, Beijing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analysis methods in neural language processing: A survey",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "49--72",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00254"
]
},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A brief history of natural logic",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Van Benthem",
"suffix": ""
}
],
"year": 2008,
"venue": "Logic, Navya-Nyaya and Applications: Homage to Bimal Matilal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan van Benthem. 2008. A brief history of natu- ral logic. In Logic, Navya-Nyaya and Applications: Homage to Bimal Matilal.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enhancing and combining sequential and tree LSTM for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016. Enhancing and combining sequen- tial and tree LSTM for natural language inference. CoRR, abs/1609.06038.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "What does BERT look at? an analysis of BERT's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "276--286",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4828"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Database. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Connectionism and cognitive architecture: A critical analysis",
"authors": [
{
"first": "Jerry",
"middle": [
"A"
],
"last": "Fodor",
"suffix": ""
},
{
"first": "Zenon",
"middle": [
"W"
],
"last": "Pylyshyn",
"suffix": ""
}
],
"year": 1988,
"venue": "Cognition",
"volume": "28",
"issue": "1",
"pages": "3--71",
"other_ids": {
"DOI": [
"10.1016/0010-0277(88)90031-5"
]
},
"num": null,
"urls": [],
"raw_text": "Jerry A. Fodor and Zenon W. Pylyshyn. 1988. Connec- tionism and cognitive architecture: A critical analy- sis. Cognition, 28(1):3-71.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Posing fair generalization tasks for natural language inference",
"authors": [
{
"first": "Atticus",
"middle": [],
"last": "Geiger",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Cases",
"suffix": ""
},
{
"first": "Lauri",
"middle": [],
"last": "Karttunen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4485--4495",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1456"
]
},
"num": null,
"urls": [],
"raw_text": "Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2019. Posing fair generalization tasks for natural language inference. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4485-4495, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Mohnert",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "240--248",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5426"
]
},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Un- der the hood: Using diagnostic classifiers to in- vestigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 240-248, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Breaking NLI systems with sentences that require simple lexical inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "650--655",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2103"
]
},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that re- quire simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Probing linguistic systematicity",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Goodwin",
"suffix": ""
},
{
"first": "Koustuv",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"J"
],
"last": "O'donnell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Goodwin, Koustuv Sinha, and Timothy J. O'Donnell. 2020. Probing linguistic systematicity.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Swabha",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "107--112",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2017"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Designing and interpreting probes with control tasks",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2733--2743",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1275"
]
},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Natural language inference with monotonicity",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Moss",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Conference on Computational Semantics -Short Papers",
"volume": "",
"issue": "",
"pages": "8--15",
"other_ids": {
"DOI": [
"10.18653/v1/W19-0502"
]
},
"num": null,
"urls": [],
"raw_text": "Hai Hu, Qi Chen, and Larry Moss. 2019a. Natu- ral language inference with monotonicity. In Pro- ceedings of the 13th International Conference on Computational Semantics -Short Papers, pages 8- 15, Gothenburg, Sweden. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "MonaLog: A lightweight system for natural language inference based on monotonicity",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Atreyee",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"S"
],
"last": "Moss",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukher- jee, Lawrence S. Moss, and Sandra K\u00fcbler. 2019b. MonaLog: A lightweight system for natural lan- guage inference based on monotonicity. ArXiv, abs/1910.08772.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Analysing the potential of seqto-seq models for incremental interpretation in taskoriented dialogue",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Sanne",
"middle": [],
"last": "Bouwmeester",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "165--174",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5419"
]
},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Sanne Bouwmeester, and Raquel Fern\u00e1ndez. 2018. Analysing the potential of seq- to-seq models for incremental interpretation in task- oriented dialogue. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 165-174, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Compositionality decomposed: how do neural networks generalise?",
"authors": [
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
},
{
"first": "Verna",
"middle": [],
"last": "Dankers",
"suffix": ""
},
{
"first": "Mathijs",
"middle": [],
"last": "Mul",
"suffix": ""
},
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2019. Compositionality decomposed: how do neural networks generalise?",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A monotonicity calculus and its completeness",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Icard",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Moss",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Tune",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Meeting on the Mathematics of Language",
"volume": "",
"issue": "",
"pages": "75--87",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3408"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Icard, Lawrence Moss, and William Tune. 2017. A monotonicity calculus and its completeness. In Proceedings of the 15th Meeting on the Mathemat- ics of Language, pages 75-87, London, UK. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Inclusion and exclusion in natural language",
"authors": [
{
"first": "Thomas",
"middle": [
"F"
],
"last": "Icard",
"suffix": ""
}
],
"year": 2012,
"venue": "Studia Logica",
"volume": "100",
"issue": "4",
"pages": "705--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas F. Icard. 2012. Inclusion and exclusion in nat- ural language. Studia Logica, 100(4):705-725.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "From programs to causal models",
"authors": [
{
"first": "Thomas",
"middle": [
"F"
],
"last": "Icard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Amsterdam Colloquium",
"volume": "",
"issue": "",
"pages": "35--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas F. Icard. 2017. From programs to causal mod- els. In Proceedings of the 21st Amsterdam Collo- quium, pages 35-44. University of Amsterdam.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Recent progress on monotonicity",
"authors": [
{
"first": "Thomas",
"middle": [
"F"
],
"last": "Icard",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"S"
],
"last": "Moss",
"suffix": ""
}
],
"year": 2013,
"venue": "Linguistic Issues in Language Technology",
"volume": "9",
"issue": "",
"pages": "1--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas F. Icard and Lawrence S. Moss. 2013. Recent progress on monotonicity. Linguistic Issues in Lan- guage Technology, 9(7):1-31.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. CoRR, abs/1707.07328.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks",
"authors": [
{
"first": "Brenden",
"middle": [
"M"
],
"last": "Lake",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "2879--2888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brenden M. Lake and Marco Baroni. 2018. General- ization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2879-2888. PMLR.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Inoculation by fine-tuning: A method for analyzing challenge datasets",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2171--2179",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1225"
]
},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Roy Schwartz, and Noah A. Smith. 2019. Inoculation by fine-tuning: A method for analyz- ing challenge datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2171-2179, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Natural Language Inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling semantic containment and exclusion in natural language inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "521--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 521-528, Manch- ester, UK. Coling 2008 Organizing Committee.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An extended model of natural logic",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "MacCartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Eight International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "140--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceed- ings of the Eight International Conference on Com- putational Semantics, pages 140-156, Tilburg, The Netherlands. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Vision: A Computational Investigation into the Human Representation and Processing of Visual Information",
"authors": [
{
"first": "David",
"middle": [],
"last": "Marr",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Marr. 1982. Vision: A Computational Investi- gation into the Human Representation and Process- ing of Visual Information. Henry Holt and Co., Inc., New York, NY, USA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Natural logic and semantics",
"authors": [
{
"first": "Lawrence",
"middle": [
"S"
],
"last": "Moss",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th Amsterdam Colloquium: Revised Selected Papers",
"volume": "",
"issue": "",
"pages": "71--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence S Moss. 2009. Natural logic and semantics. In Proceedings of the 18th Amsterdam Colloquium: Revised Selected Papers, pages 71-80, Berlin. Uni- versity of Amsterdam, Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Stress test evaluation for natural language inference",
"authors": [
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Abhilasha",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2340--2353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Analyzing compositionality-sensitivity of NLI models",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Yicheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6867--6874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Yicheng Wang, and Mohit Bansal. 2019a. Analyzing compositionality-sensitivity of NLI mod- els. In Proceedings of the AAAI Conference on Arti- ficial Intelligence, volume 33, pages 6867-6874.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Adversarial NLI: A new benchmark for natural language understanding",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019b. Adversarial NLI: A new benchmark for natural lan- guage understanding.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Most \"babies\" are \"little\" and most \"problems\" are \"huge\": Compositional entailment in adjective-nouns",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2164--2173",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1204"
]
},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Chris Callison-Burch. 2016. Most \"babies\" are \"little\" and most \"problems\" are \"huge\": Compositional entailment in adjective-nouns. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2164-2173, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Direct and indirect effects",
"authors": [
{
"first": "Judea",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, UAI'01",
"volume": "",
"issue": "",
"pages": "411--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judea Pearl. 2001. Direct and indirect effects. In Proceedings of the Seventeenth Conference on Un- certainty in Artificial Intelligence, UAI'01, page 411-420, San Francisco, CA, USA. Morgan Kauf- mann Publishers Inc.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Probing natural language inference models through semantic fragments",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"S"
],
"last": "Moss",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Richardson, Hai Hu, Lawrence S. Moss, and Ashish Sabharwal. 2019. Probing natural language inference models through semantic fragments.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Studies in Natural Logic and Categorial Grammar",
"authors": [
{
"first": "V\u00edctor",
"middle": [],
"last": "S\u00e1nchez-Valencia",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00edctor S\u00e1nchez-Valencia. 1991. Studies in Natural Logic and Categorial Grammar. Ph.D. thesis, Uni- versity of Amsterdam.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "olmpics -on what language model pre-training captures",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. olmpics -on what language model pre-training captures.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "BERT rediscovers the classical NLP pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1452"
]
},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Causal mediation analysis for interpreting neural nlp: The case of gender bias",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Nevo",
"suffix": ""
},
{
"first": "Yaron",
"middle": [],
"last": "Singer",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for inter- preting neural nlp: The case of gender bias.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Do neural models learn systematicity of monotonicity inference in natural language?",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, and Kentaro Inui. 2020. Do neural models learn sys- tematicity of monotonicity inference in natural lan- guage?",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Can neural networks understand monotonicity reasoning?",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "31--40",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4804"
]
},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Ken- taro Inui, Satoshi Sekine, Lasha Abzianidze, and Jo- han Bos. 2019a. Can neural networks understand monotonicity reasoning? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31-40, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Lasha",
"middle": [],
"last": "Abzianidze",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)",
"volume": "",
"issue": "",
"pages": "250--255",
"other_ids": {
"DOI": [
"10.18653/v1/S19-1027"
]
},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Ken- taro Inui, Satoshi Sekine, Lasha Abzianidze, and Jo- han Bos. 2019b. HELP: A dataset for identifying shortcomings of neural models in monotonicity rea- soning. In Proceedings of the Eighth Joint Con- ference on Lexical and Computational Semantics (*SEM 2019), pages 250-255, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Results where classifier probes are trained on BERT representations to predict the value of lexrel and the output of INFER"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "presents the simple algorithm INFER, which is our learning target. It takes in a MoNLI example and stores the lexical entailment relation between the substituted words in the variable lexrel . If negation is present, the reverse of lexrel is returned; if there is no negation, lexrel itself is returned. This is simply an algorithmic description of the MoNLI construction method. The most important piece is the intermediate variable lexrel ."
},
"TABREF0": {
"text": "MoNLI as a Challenge Test SetWe first use MoNLI as a challenge test dataset, i.e., models trained only on SNLI are expected to generalize to MoNLI. MoNLI can be considered a",
"type_str": "table",
"content": "<table><tr><td>Model CBOW BiLSTM ESIM ESIM ESIM BERT BERT BERT</td><td>Input pretraining NLI train data GloVe SNLI train GloVe SNLI train GloVe SNLI train GloVe BERT SNLI train BERT</td><td>5 Behavioral Evaluations With NMoNLI fine-tuning SNLI PMoNLI NMoNLI SNLI NMoNLI 78.9 64.6 22.9 65.9 95.5 81.6 73.2 37.9 74.6 93.5 87.9 86.6 39.4 56.9 96.2 ----98.0 ----35.5 90.8 94.4 2.2 90.5 90.0 ----96.7 5.1 No MoNLI fine-tuning ----62.3</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"text": "The results of our behavioral analysis. The columns labeled No MoNLI fine-tuning display the challenge test set results (Section 5.1), and the columns labeled With MoNLI fine-tuning display systematic generalization task results (Section 5.2). The numbers are accuracy values; all the datasets have balanced label distributions. Dashes mark experiments that would involve untrained NLI parameters due to training/fine-tuning set-up.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF2": {
"text": "under the heading 'No MoNLI fine-tuning', and they are stark. The four models achieve comparably high accuracies on SNLI and PMoNLI, the examples where no downward monotone operators scope over the substitution site. However, they are well below chance accuracy on NMoNLI, the examples where not scopes over the substitution site. BERT is more extreme than the other models, achieving a higher accuracy on PMoNLI than SNLI and almost zero accuracy on NMoNLI. High performance on PMoNLI shows that models have knowledge of the lexical relations between the substituted words, but low performance on NMoNLI shows the models have no knowledge of the downward monotone nature of not. In fact, the below chance accuracy on NMoNLI indicates that these models are somewhat reliably (incredibly reliably in BERT's case) predicting the wrong label on these examples, suggesting that they treat NMoNLI examples the same as PMoNLI examples.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}