| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:39:19.206928Z" |
| }, |
| "title": "MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity", |
| "authors": [ |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "huhai@indiana.edu" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kyler@allenai.org" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-known (surface-level) monotonicity facts about quantifiers, lexical items and token-level polarity information. Despite its simplicity, we find our approach to be competitive with other logic-based NLI models on the SICK benchmark. We also use MonaLog in combination with the current state-of-the-art model BERT in a variety of settings, including for compositional data augmentation. We show that MonaLog is capable of generating large amounts of high-quality training data for BERT, improving its accuracy on SICK.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-known (surface-level) monotonicity facts about quantifiers, lexical items and token-level polarity information. Despite its simplicity, we find our approach to be competitive with other logic-based NLI models on the SICK benchmark. We also use MonaLog in combination with the current state-of-the-art model BERT in a variety of settings, including for compositional data augmentation. We show that MonaLog is capable of generating large amounts of high-quality training data for BERT, improving its accuracy on SICK.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling (Conneau et al., 2017) and the introduction of several new large-scale inference datasets (Marelli et al., 2014; Bowman et al., 2015; Williams et al., 2018; Khot et al., 2018) . Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) (Naik et al., 2018; McCoy et al., 2019) , as well as finding systematic biases in benchmark datasets (Gururangan et al., 2018; Poliak et al., 2018) .", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 171, |
| "text": "(Conneau et al., 2017)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 239, |
| "end": 261, |
| "text": "(Marelli et al., 2014;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 262, |
| "end": 282, |
| "text": "Bowman et al., 2015;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 283, |
| "end": 305, |
| "text": "Williams et al., 2018;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 306, |
| "end": 324, |
| "text": "Khot et al., 2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 501, |
| "end": 520, |
| "text": "(Naik et al., 2018;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 521, |
| "end": 540, |
| "text": "McCoy et al., 2019)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 602, |
| "end": 627, |
| "text": "(Gururangan et al., 2018;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 628, |
| "end": 648, |
| "text": "Poliak et al., 2018)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In parallel to these efforts, there have also been recent logic-based approaches to NLI (Mineshima et al., 2015; Mart\u00ednez-G\u00f3mez et al., 2016; Mart\u00ednez-G\u00f3mez et al., 2017; Abzianidze, 2017; Yanaka et al., 2018) , which take inspiration from linguistics. In contrast to early attempts at using logic (Bos and Markert, 2005) , these approaches have proven to be more robust. However, they tend to use many rules, and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 112, |
| "text": "(Mineshima et al., 2015;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 113, |
| "end": 141, |
| "text": "Mart\u00ednez-G\u00f3mez et al., 2016;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 142, |
| "end": 170, |
| "text": "Mart\u00ednez-G\u00f3mez et al., 2017;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 171, |
| "end": 188, |
| "text": "Abzianidze, 2017;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 189, |
| "end": 209, |
| "text": "Yanaka et al., 2018)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 298, |
| "end": 321, |
| "text": "(Bos and Markert, 2005)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from van Benthem (1986) . In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)? and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic (Angeli and Manning, 2014) , our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity (Hu and Moss, 2018) ; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for expensive alignment and search sub-procedures (Stern and Dagan, 2011) , and relies on a much smaller set of background knowledge and primitive relations than MacCartney and Manning (2009) .", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 163, |
| "text": "Benthem (1986)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 577, |
| "end": 602, |
| "text": "Angeli and Manning, 2014)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 813, |
| "end": 832, |
| "text": "(Hu and Moss, 2018)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1057, |
| "end": 1079, |
| "text": "Stern and Dagan, 2011)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1168, |
| "end": 1197, |
| "text": "MacCartney and Manning (2009)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To show the effectiveness of our approach, we show results on the SICK dataset (Marelli et al., 2014) , a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many Figure 1 : An illustration of our general monotonicity reasoning pipeline using an example premise and hypothesis pair: All schoolgirls are on the train and All happy schoolgirls are on the train.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 101, |
| "text": "(Marelli et al., 2014)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 298, |
| "end": 306, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by Kalouli et al. (2017, 2018) . Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark. Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT (Devlin et al., 2019) , including for compositional data augmentation, i.e., regenerating entailed versions of examples in our training sets. To our knowledge, our approach is the first attempt to use monotonicity for data augmentation, and we show that such augmentation can generate high-quality training data with which models like BERT can improve performance.", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 264, |
| "text": "Kalouli et al. (2017", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 265, |
| "end": 288, |
| "text": "Kalouli et al. ( , 2018", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 745, |
| "end": 766, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of NLI is to determine, given a premise set P and a hypothesis sentence H, whether H follows from the meaning of P (Dagan et al., 2005) . In this paper, we look at single-premise problems that involve making a standard 3-way classification decision (i.e., Entailment (E), Contradiction (C) and Neutral (N)). Our general monotonicity reasoning system works according to the pipeline in Figure 1 . Given a premise text, we first do Arrow Tagging by assigning polarity annotations (i.e., the arrows \u2191 and \u2193, which are the basic primitives of our logic) to tokens in the text. These surface-level annotations, in turn, are associated with a set of natural logic inference rules that provide instructions for how to generate entailments and contradictions by span replacements over these arrows (which relies on a library of span replacement rules). For example, in the sentence All schoolgirls are on the train, the token schoolgirls is associated with a polarity annotation \u2193, which indicates that in this sentential context, the span schoolgirls can be replaced with a semantically more specific concept (e.g., happy schoolgirls) in order to generate an entailment. A generation and search procedure is then applied to see if the hypothesis text can be generated from the premise using these inference rules. A proof in this model is thus a particular sequence of edits (e.g., see Figure 2 ) that derives the hypothesis text from the premise text and yields an entailment or contradiction.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 144, |
| "text": "(Dagan et al., 2005)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 391, |
| "end": 399, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1381, |
| "end": 1389, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Our System: MonaLog", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the following sections, we provide the details of our particular implementation of these different components in MonaLog.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our System: MonaLog", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given an input premise P , MonaLog first polarizes each of its tokens and constituents, using the system described by Hu and Moss (2018) , which performs polarization on a CCG parse tree. For example, a polarized P could be every \u2191 linguist \u2193 swim \u2191 . Note that since we ignore morphology in the system, tokens are represented by lemmas.", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 138, |
| "text": "Hu and Moss (2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polarization (Arrow Tagging)", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "MonaLog utilizes two auxiliary sets. First, a knowledge base K that stores the world knowledge needed for inference, e.g., semanticist \u2264 linguist and swim \u2264 move, which capture the facts that \u27e6semanticist\u27e7 denotes a subset of \u27e6linguist\u27e7, and that \u27e6swim\u27e7 denotes a subset of \u27e6move\u27e7, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet (Miller, 1995) . Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the \"bank\" is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x \u2264 y or x \u22a5 y relations only if both x and y are words in the premise-hypothesis pair. Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every = all = each \u2264 most \u2264 many \u2264 a few = several \u2264 some = a; the \u2264 some = a; on \u22a5 off; up \u22a5 down; etc.", |
| "cite_spans": [ |
| { |
| "start": 433, |
| "end": 447, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 853, |
| "end": 854, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Base K and Sentence Base S", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We also need to keep track of relations that can potentially be derived from the P -H sentence pair. For instance, for all adjectives and nouns that appear in the sentence pair, it is easy to obtain: adj + n \u2264 n (black cat \u2264 cat). Similarly, we have n + PP/relative clause \u2264 n (friend in need \u2264 friend, dog that bites \u2264 dog), VP + advP/PP \u2264 VP (dance happily/in the morning \u2264 dance), and so on. We also have rules that extract pieces of knowledge from P directly, e.g.: n\u2081 \u2264 n\u2082 from sentences of the pattern every n\u2081 is a n\u2082. One can also connect MonaLog to bigger knowledge graphs or ontologies such as DBpedia.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Base K and Sentence Base S", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A sentence base S, on the other hand, stores the generated entailments and contradictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Base K and Sentence Base S", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Once we have a polarized CCG tree, and some \u2264 relations in K, generating entailments and contradictions is fairly straightforward. A concrete example is given in Figure 2 . Note that the generated \u2264 instances are capable of producing mostly monotonicity inferences, but MonaLog can be extended to include other more complex inferences in natural logic, hence the name MonaLog. This extension is addressed in more detail in .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 162, |
| "end": 170, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Entailments/inferences The key operation for generating entailments is replacement, or substitution. It can be summarized as follows: 1) For upward-entailing (UE) words/constituents, replace them with words/constituents that denote bigger sets. 2) For downward-entailing (DE) words/constituents, either replace them with those denoting smaller sets, or add modifiers (adjectives, adverbs and/or relative clauses) to create a smaller set. Thus for every \u2191 linguist \u2193 swim \u2191 , MonaLog can produce the following three entailments by replacing each word with the appropriate word from K: most \u2191 linguist \u2193 swim \u2191 , every \u2191 semanticist \u2193 swim \u2191 and every \u2191 linguist \u2193 move \u2191 . These are the results of a single replacement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Performing replacement for multiple rounds/depths can easily produce many more entailments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Contradictory sentences To generate sentences contradictory to the input sentence, we do the following: 1) If the sentence starts with \"no (some)\", replace the first word with \"some (no)\". 2) If the object is quantified by \"a/some/the/every\", change the quantifier to \"no\", and vice versa. 3) Negate the main verb or remove the negation. See examples in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 354, |
| "end": 362, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Neutral sentences MonaLog returns Neutral if it cannot find the hypothesis H in S.entailments or S.contradictions. Thus, there is no need to generate neutral sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Now that we have a set of inferences and contradictions stored in S, we can simply see if the hypothesis is in either one of the sets by comparing the strings. If yes, then return Entailment or Contradiction; if not, return Neutral, as schematically shown in Figure 2 . However, the exact-string-match method is too brittle. Therefore, we apply a heuristic: if the only difference between sentences S\u2081 and S\u2082 is in the set {\"a\", \"be\", \"ing\"}, then S\u2081 and S\u2082 are considered semantically equivalent.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 259, |
| "end": 267, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The search is implemented using depth-first search, with a default depth of 2, i.e., at most 2 replacements for each input sentence. At each node, MonaLog \"expands\" the sentence (i.e., an entailment of its parent) by obtaining its entailments and contradictions, and checks whether H is in either set. If so, the search is terminated; otherwise the system keeps searching until all the possible entailments and contradictions up to depth 2 have been visited.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "P : A \u2191 schoolgirl \u2191 with \u2191 a \u2191 black \u2191 bag \u2191 is \u2191 on \u2191 a \u2191 crowded \u2191 train \u2191 A \u2191 girl \u2191 with \u2191 a \u2191 black \u2191 bag \u2191 is \u2191 on \u2191 a \u2191 crowded \u2191 train \u2191", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "A girl is on a crowded train A girl is on a train Figure 2 : Example search tree for SICK 340, where P is A schoolgirl with a black bag is on a crowded train, with the H: A girl with a black bag is on a crowded train. Only one replacement is allowed at each step. Sentences at the nodes are generated entailments. Sentences in rectangles are the generated contradictions. In this case our system will return entail. The search will terminate after reaching the H in this case, but for illustrative purposes, we show entailments of depth up to 3. To exclude the influence of morphology, all sentences are represented at the lemma level in MonaLog, which is not shown here.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 50, |
| "end": 58, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "A \u2191 schoolgirl \u2191 with \u2191 a \u2191 bag \u2191 is \u2191 on \u2191 a \u2191 crowded \u2191 train \u2191 ... ... A \u2191 schoolgirl \u2191 is \u2191 on \u2191 a \u2191 crowded \u2191 train \u2191 ... ...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK (Marelli et al., 2014) , comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and perform fine-tuning on BERT (Devlin et al., 2019) , a language model based on the transformer architecture (Vaswani et al., 2017) , with the expanded dataset. In all experiments, we use the Base, Uncased model of BERT (https://github.com/google-research/bert).", |
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 169, |
| "text": "(Marelli et al., 2014)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 418, |
| "end": 438, |
| "text": "(Devlin et al., 2019", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 497, |
| "end": 519, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MonaLog and SICK", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The SICK (Marelli et al., 2014) dataset includes around 10,000 English sentence pairs that are annotated to have either \"Entailment\", \"Neutral\" or \"Contradictory\" relations. We choose SICK as our testing ground for several reasons. First, we want to test on a large-scale dataset, since we have shown that a similar model reaches good results on parts of the smaller FraCaS dataset (Cooper et al., 1996) . Second, we want to make our results comparable to those of previous logic-based models such as the ones described in (Bjerva et al., 2014; Abzianidze, 2015; Mart\u00ednez-G\u00f3mez et al., 2017; Yanaka et al., 2018) , which were also tested on SICK. We use the data split provided in the dataset: 4,439 training problems, 4,906 test problems and 495 trial problems; see Table 1 for examples.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 31, |
| "text": "(Marelli et al., 2014)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 383, |
| "end": 404, |
| "text": "(Cooper et al., 1996)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 524, |
| "end": 545, |
| "text": "(Bjerva et al., 2014;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 546, |
| "end": 563, |
| "text": "Abzianidze, 2015;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 564, |
| "end": 592, |
| "text": "Mart\u00ednez-G\u00f3mez et al., 2017;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 593, |
| "end": 613, |
| "text": "Yanaka et al., 2018)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 764, |
| "end": 765, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 810, |
| "end": 817, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The SICK Dataset", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "There are numerous issues with the original SICK dataset, as illustrated by Kalouli et al. (2017, 2018) . They first manually checked 1,513 pairs tagged as \"A entails B but B is neutral to A\" (AeBBnA) in the original SICK, correcting 178 pairs that they considered to be wrong (Kalouli et al., 2017) . Later, Kalouli et al. (2018) extracted pairs from SICK whose premise and hypothesis differ in only one word, and created a simple rule-based system that used WordNet information to solve the problem. Their WordNet-based method was able to solve 1,651 problems, whose original labels in SICK were then manually checked and corrected against their system's output. They concluded that 336 problems are wrongly labeled in the original SICK. Combining the above two corrected subsets of SICK, minus the overlap, results in their corrected SICK dataset, which has 3,016 problems (3/10 of the full SICK), with 409 labels different from the original SICK (see breakdown in Table 2 ). 16 of the corrections are in the trial set, 197 of them in the training set and 196 in the test set. This suggests that more than one out of ten problems in SICK are potentially problematic. For this reason, two authors of the current paper checked the 409 changes. We found that only 246 problems are labeled the same by our team and by Kalouli et al. (2018) . For cases where there is disagreement, we adjudicated the differences after a discussion.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 96, |
| "text": "Kalouli et al. (2017", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 97, |
| "end": 120, |
| "text": "Kalouli et al. ( , 2018", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 294, |
| "end": 316, |
| "text": "(Kalouli et al., 2017)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 326, |
| "end": 347, |
| "text": "Kalouli et al. (2018)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1337, |
| "end": 1358, |
| "text": "Kalouli et al. (2018)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 988, |
| "end": 995, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hand-corrected SICK", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We are aware that the partially checked SICK (by two teams) is far from ideal. We therefore present results for two versions of SICK for experiment 1 (section 4): the original SICK and the version corrected by our team. For the data augmentation experiment in section 5, we only performed fine-tuning on the corrected SICK. As shown in a recent SICK annotation experiment by Kalouli et al. (2019) , annotation is a complicated issue influenced by linguistic and non-linguistic factors. We leave checking the full SICK dataset to future work.", |
| "cite_spans": [ |
| { |
| "start": 375, |
| "end": 396, |
| "text": "Kalouli et al. (2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hand-corrected SICK", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "4 Experiment 1: Using MonaLog Directly", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hand-corrected SICK", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The goal of experiment 1 is to test how accurately MonaLog solves problems in a large-scale dataset. We first used the system to solve the 495 problems in the trial set and then manually identified the cases in which the system failed. Then we determined which syntactic transformations are needed for MonaLog. After improving the results on the trial data by introducing a preprocessing step to handle limited syntactic variation (see below), we applied MonaLog on the test set. This means that the rule base of the system was optimized on the trial data, and we can test its generalization capability on the test data. The main obstacle for MonaLog is the syntactic variation in the dataset, illustrated by the examples in Table 1 . There exist multiple ways of dealing with these variations: one approach is to 'normalize' unknown syntactic structures to a known structure. For example, we can transform passive sentences into active ones and convert existential sentences into the base form (see ex. 8399 and 219 in Table 1 ). Another approach is to use a more abstract syntactic/semantic representation so that the linear word order can largely be ignored, e.g., represent a sentence by its dependency parse, or use Abstract Meaning Representation. Here, we explore the first option and leave the second approach to future work. We believe that dealing with a wide range of syntactic variations requires tools designed specifically for that purpose; the goal of MonaLog is instead to generate entailments and contradictions based on a polarized sentence. Below, we list the most important syntactic transformations we perform in preprocessing.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 727, |
| "end": 734, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1022, |
| "end": 1029, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Setup and Preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "1. Convert all passive sentences to active using pass2act. If the passive does not contain a by phrase, we add by a person.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup and Preprocessing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "form (see ex. 219 in Table 1 ). 3. Other transformations: someone/anyone/no one \u2192 some/any/no person; there is no man doing sth. \u2192 no man is doing sth.; etc.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 28, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Convert existential clauses into their base", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The results of our system on uncorrected and corrected SICK are presented in Table 3 , along with comparisons with other systems. Our accuracy on the uncorrected SICK (77.19%) is much higher than the majority baseline (56.36%) or the hypothesis-only baseline (56.87%) reported by Poliak et al. (2018) , and only several points lower than current logic-based systems. Since our system is based on natural logic, there is no need for translation into logical forms, which makes the reasoning steps transparent and much easier to interpret. That is, from the generated entailments and contradictions, we can produce a natural language trace of the system's reasoning, see Fig. 2 .", |
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 300, |
| "text": "Poliak et al. (2018)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 77, |
| "end": 84, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 641, |
| "end": 647, |
| "text": "Fig. 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Our results on the corrected SICK are even higher (see lower part of Table 3 ), demonstrating the effect of data quality on the final results. Note that with some simple syntactic transformations we can gain 1-2 points in accuracy. Table 4 shows MonaLog's performance on the individual relations. The system is clearly very good at identifying entailments and contradictions, as demonstrated by the high precision values, especially on the corrected SICK set (98.50 precision for E and 95.02 precision for C). The lower recall values are due to MonaLog's current inability to handle syntactic variation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 76, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 232, |
| "end": 239, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Based on these results, we tested a hybrid model of MonaLog and BERT (see Table 3 ) where we exploit MonaLog's strength: Since MonaLog has a very high precision on Entailment and Contradiction, we can always trust MonaLog if it predicts E or C; when it returns N, we then fall back to BERT. This hybrid model improves the accuracy of BERT by 1% absolute to 85.95% on the corrected SICK. On the uncorrected SICK dataset, the hybrid system performs worse than BERT. Since MonaLog is optimized for the corrected SICK, it may mislabel many E and C judgments in the uncorrected dataset. The stand-alone BERT system performs better on the uncorrected data (86.74%) than the corrected set (85.00%). The corrected set may be too inconsistent since only a part has been checked.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 81, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
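The hybrid decision rule described above is simple: trust MonaLog whenever it outputs E or C, and defer to BERT otherwise. A minimal sketch, where the two predictor callables are assumed interfaces rather than the actual MonaLog/BERT APIs:

```python
def hybrid_predict(premise, hypothesis, monalog_predict, bert_predict):
    """Trust the high-precision symbolic system for E/C; back off to BERT on N."""
    label = monalog_predict(premise, hypothesis)
    if label in ("E", "C"):  # MonaLog's E/C predictions have very high precision
        return label
    return bert_predict(premise, hypothesis)
```

The design choice is purely precision-driven: MonaLog rarely predicts E or C incorrectly, so overriding BERT in those cases can only help, while its low recall is covered by the neural fallback.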
| { |
| "text": "Overall, these hybird results show that it is possible to combine our high-precision system with deep learning architectures. However, more work is necessary to optimize this combined system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Upon closer inspection, some of MonaLog's errors consist of difficult cases, as shown in Table 5 . For example, in ex. 359, if our knowledge base K contains the background fact chasing \u00a7 running, then MonaLog's judgment of C would be correct. In ex. 1402, if crying means screaming, then the label should be E; however, if crying here means shedding tears, then the label should probably be N. Here we also see potentially problematic labels (ex. 1760, 3403) in the original SICK dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 96, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Another point of interest is that 19 of Mona-Log's mistakes are related to the antonym pair man vs. woman (e.g., ex. 5793 in Table 5 ). This points to inconsistency of the SICK dataset: Whereas there are at least 19 cases tagged as Neutral (e.g., ex. 5793), there are at least 17 such pairs that are annotated as Contradictions in the test set (e.g., ex. 3521), P: A man is dancing, H: A woman is dancing (ex. 9214), P: A shirtless man is jumping over a log, H: A shirtless woman is jumping over a log. If man and woman refer to the same entity, then clearly that entity cannot be man and woman at the same time, which makes the sentence pair a contradiction. If, however, they do not refer to the same entity, then they should be Neutral. A woman is slicing a fish N n.a. C Table 5 : Examples of incorrect answers by MonaLog; n.a. = the problem has not been checked in corr. SICK.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 132, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 775, |
| "end": 782, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Our second experiment focuses on using Mona-Log to generate additional training data for machine learning models such as BERT. To our knowledge, this is the first time that a rule-based NLI system has been successfully used to generate training data for a deep learning application.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment 2: Data Generation Using MonaLog", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As described above, MonaLog generates entailments and contradictions when solving problems. These can be used as additional training data for a machine learning model. I.e., we pair the newly generated sentences with their input sentence, creating new pairs for training. For example, we take all the sentences in the nodes in Figure 2 as inferences and all the sentences in rectangles as contradictions, and then form sentence pairs with the input sentence. The additional data can be used directly, almost without human intervention. Thus for experiment 2, the goal is to examine the quality of these generated sentence pairs. For this, we re-train a BERT model on these pairs. If BERT trained on the manually annotated SICK training data is improved by adding data generated by MonaLog, then we can conclude that the gen-erated data is of high quality, even comparable to human annotated data, which is what we found.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 327, |
| "end": 335, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
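The pairing step can be sketched as follows, under the assumption that MonaLog exposes its generated entailments and contradictions as two collections per input sentence (the function name and interface here are illustrative):

```python
def make_training_pairs(premise, entailed, contradicted):
    """Turn MonaLog's generated sentences into labeled NLI training pairs.

    entailed:     sentences MonaLog derived as entailments of `premise`
    contradicted: sentences MonaLog derived as contradictions of `premise`
    """
    pairs = [(premise, h, "E") for h in entailed]
    pairs += [(premise, h, "C") for h in contradicted]
    return pairs
```

Each generated sentence becomes the hypothesis of a new pair, with the label determined entirely by how the sentence was derived, so no human annotation is required.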
| { |
| "text": "More specifically, we compare the performance of BERT models trained on a) SICK training data alone, and b) SICK training data plus the entailing and contradictory pairs generated by Mona-Log. All experiments are carried out using our corrected version of the SICK data set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "However, note that MonaLog is designed to only generate entailments and contradictions. Thus, we only have access to newly generated examples for those two cases, we do not acquire any additional neutral cases. Consequently, adding these examples to the training data will introduce a skewing that does not reflect the class distribution in the test set. Since this will bias the machine learner against neutral cases, we use the following strategy to counteract that tendency: We relabel all cases where BERT is not confident enough for either E or C into N. We set this threshold to 0.95 but leave further optimization of the threshold to future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "5.1" |
| }, |
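The relabeling strategy can be sketched as follows, assuming the classifier returns a probability distribution over the three labels (the dictionary interface is our own illustration, not BERT's actual output format):

```python
def relabel(probs, threshold=0.95):
    """Map a {label: probability} distribution to a final label.

    Predict E or C only when the model is confident enough; otherwise
    fall back to N, counteracting the E/C skew introduced by the
    augmented training data.
    """
    best = max(probs, key=probs.get)
    if best in ("E", "C") and probs[best] >= threshold:
        return best
    return "N"
```

Because the generated data contains no neutral pairs, the classifier's prior shifts toward E and C; thresholding pushes low-confidence E/C predictions back to N without retraining.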
| { |
| "text": "MonaLog is prone to over-generation. For example, it may wrongly add the same adjective before a noun (phrase) twice to create a more specific noun, e.g., young young man \u00a7 young man \u00a7 man. Since it is possible that such examples influence the machine learning model negatively, we look into filtering such examples to improve the quality of the additional training data. We manually inspected 100 sentence pairs generated by MonaLog to check the quality and naturalness of the new sentences (see Table 6 for examples). All of the generated sentences are correct in the sense that the relation between the premise and the hypothesis is correctly labeled as entailment or contradiction (see Table 7 ). While we did not find any sentence pairs with wrong labels, some generated sentences are unnatural, as shown in Table 6 . Both unnatural examples contain two successive copies of the same PP.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 497, |
| "end": 504, |
| "text": "Table 6", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 690, |
| "end": 697, |
| "text": "Table 7", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 813, |
| "end": 820, |
| "text": "Table 6", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Filtering and Quality Control", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "E Few \u00d2 people \u00d3 be \u00d3 eat \u00d3 at \u00d3 red \u00d3 table \u00d3 in \u00d3 a \u00d3 restaurant \u00d3 without \u00d3 light \u00d2 Few \u00d2 large \u00d3 people \u00d3 be \u00d3 eat \u00d3 at \u00d3 red \u00d3 table \u00d3 in \u00d3 a \u00d3 Asian \u00d3 restaurant \u00d3 without \u00d3 light \u00d2 correct", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Filtering and Quality Control", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Note that our data generation hinges on correct polarities on the words and constituents. For instance, in the last example of Table 6 , the polarization system needs to know that few is downward entailing on both of its arguments, and without flips the arrow of its argument, in order to produce the correct polarities, on which the replacement of MonaLog depends.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 127, |
| "end": 134, |
| "text": "Table 6", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Filtering and Quality Control", |
| "sec_num": "5.2" |
| }, |
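The flipping behavior of downward-entailing operators can be illustrated with a toy left-to-right pass. Real polarity-marking systems (e.g. ccg2mono) operate over CCG derivations, so this flat sketch only demonstrates the flipping logic, and the operator list is an assumption:

```python
# Toy polarity propagation: each downward-entailing operator flips the
# monotonicity of the material in its scope. This ignores syntactic
# structure entirely and is only meant to illustrate the flipping idea.
FLIPPERS = {"few", "without", "no", "not"}  # assumed downward-entailing operators

def polarize(tokens):
    """Mark each token with its polarity in a single left-to-right pass."""
    polarity = "up"
    marked = []
    for tok in tokens:
        marked.append((tok, polarity))
        if tok in FLIPPERS:
            polarity = "down" if polarity == "up" else "up"
    return marked
```

On "few people ... without light", the pass marks people as downward (inside few's scope) but light as upward again, since without flips the already-flipped context back, matching the arrows in the example above.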
| { |
| "text": "In order to filter unnatural sentences, such as the examples in Table 6 , we use a rule-based filter and remove sentences that contain bigrams of repeated words 9 . We experiment with using one quarter or 9 We also investigated using a bigram based language one half randomly selected sentences in addition to a setting where we use the complete set of generated sentences. Table 8 shows the amount of additional sentence pairs per category along with the results of using the automatically generated sentences as additional training data.", |
| "cite_spans": [ |
| { |
| "start": 205, |
| "end": 206, |
| "text": "9", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 64, |
| "end": 71, |
| "text": "Table 6", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 374, |
| "end": 381, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Filtering and Quality Control", |
| "sec_num": "5.2" |
| }, |
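The rule-based filter reduces to checking for immediately repeated tokens, a direct consequence of MonaLog's over-generation pattern (e.g., young young man). A minimal sketch:

```python
def has_repeated_bigram(sentence):
    """True if any word is immediately repeated (e.g. 'young young man')."""
    toks = sentence.split()
    return any(a == b for a, b in zip(toks, toks[1:]))

def filter_generated(sentences):
    """Drop over-generated sentences containing repeated-word bigrams."""
    return [s for s in sentences if not has_repeated_bigram(s)]
```

This catches repeated adjectives directly; repeated multi-word PPs would need a longer n-gram window, which the simple word-bigram check does not cover.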
| { |
| "text": "It is obvious that adding the additional training data results in gains in accuracy even though the training data becomes increasingly skewed towards E and C. When we add all additional sentence pairs, accuracy increases by more than 1.5 percent points. This demonstrates both the robustness of BERT in the current experiment and the usefulness of the generated data. The more data we add, the better the system performs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We also see that raising the threshold to relabel uncertain cases as neutral gives a small boost, from 86.51% to 86.71%. This translates into 10 cases where the relabeling corrected the answer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Finally, we also investigated whether the hybrid system, i.e., MonaLog followed by the retrained BERT, can also profit from the additional training data. Intuitively, we would expect smaller gains since MonaLog already handles a fair amount of the entailments and contradictions, i.e., those cases where BERT profits from more examples. However the experiments show that the hybrid system reaches an even higher accuracy of 87.16%, more than 2 percent points above the model to filter out non-natural sentences. However, this affected the results negatively. Table 8 : Results of BERT trained on MonaLog-generated entailments and contradictions plus SICK.train (using the corrected SICK set).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 559, |
| "end": 566, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "baseline, equivalent to roughly 100 more problems correctly solved. Setting the high threshold for BERT to return E or C further improves accuracy to 87.49%. This brings us into the range of the state-of-the-art results, even though a direct comparison is not possible because of the differences between the corrected and uncorrected dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We have presented a working natural-logic-based system, MonaLog, which attains high accuracy on the SICK dataset and can be used to generated natural logic proofs. Considering how simple and straightforward our method is, we believe it can serve as a strong baseline or basis for other (much) more complicated systems, either logic-based or ML/DL-based. In addiction, we have shown that MonaLog can generate high-quality training data, which improves the accuracy of a deep learning model when trained on the expanded dataset. As a minor point, we manually checked the corrected SICK dataset by Kalouli et al. (2017 Kalouli et al. ( , 2018 .", |
| "cite_spans": [ |
| { |
| "start": 595, |
| "end": 615, |
| "text": "Kalouli et al. (2017", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 616, |
| "end": 639, |
| "text": "Kalouli et al. ( , 2018", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "There are several directions for future work. The first direction concerns the question how to handle syntactic variation from natural language input. That is, the computational process(es) for inference will usually be specified in terms of strict syntactic conditions, and naturally occurring sentences will typically not conform to those conditions. Among the strategies which allow their systems to better cope with premises and hypotheses with various syntactic structures are sophisticated versions of alignment used by e.g. MacCartney (2009) ; Yanaka et al. (2018) . We will need to extend MonaLog to be able to handle such variation. In the future, we plan to use dependency relations as representations of natural language input and train a classifier that can determine which relations are crucial for inference.", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 548, |
| "text": "MacCartney (2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 551, |
| "end": 571, |
| "text": "Yanaka et al. (2018)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Second, as mentioned earlier, we are in need of a fully (rather than partially) checked SICK dataset to examine the impact of data quality on the results since the partially checked dataset may be inherently inconsistent between the checked and non-checked parts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, with regard to the machine learning experiments, we plan to investigate other methods of addressing the imbalance in the training set created by additional entailments and contradictions. We will look into options for artificially creating neutral examples, e.g. by finding reverse entailments 10 , as illustrated by Richardson et al. (2019) .", |
| "cite_spans": [ |
| { |
| "start": 326, |
| "end": 350, |
| "text": "Richardson et al. (2019)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our correction can be found at: https://github.com/ huhailinguist/SICK correction", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/huhailinguist/ccg2mono", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "K means \"is contradictory to\". 4 There may be better and robust ways of incorporating WordNet relations to K; we leave this for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/kkalouli/SICK-processing", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the complete list of transformations see: https:// github.com/huhailinguist/SICK correction", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their helpful comments. Hai Hu is supported by China Scholarship Council.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A tableau prover for natural logic and language", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lasha Abzianidze", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "2492--2502", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D15-1296" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lasha Abzianidze. 2015. A tableau prover for natu- ral logic and language. In Proceedings of EMNLP, pages 2492-2502.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "LangPro: Natural language theorem prover", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lasha Abzianidze", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EMNLP: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "115--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lasha Abzianidze. 2017. LangPro: Natural language theorem prover. In Proceedings of EMNLP: System Demonstrations, pages 115-120, Copenhagen, Den- mark.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Nat-uralLI: Natural logic inference for common sense reasoning", |
| "authors": [ |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "534--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabor Angeli and Christopher D. Manning. 2014. Nat- uralLI: Natural logic inference for common sense reasoning. In Proceedings of EMNLP, pages 534- 545, Doha, Qatar.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Representing meaning with a combination of logical and 10 In the set relations by MacCartney (2009), if A \u00c4 B, then A entails B, but B is neutral to A. distributional models", |
| "authors": [ |
| { |
| "first": "Islam", |
| "middle": [], |
| "last": "Beltagy", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Pengxiang", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond J", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computational Linguistics", |
| "volume": "42", |
| "issue": "4", |
| "pages": "763--808", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Islam Beltagy, Stephen Roller, Pengxiang Cheng, Ka- trin Erk, and Raymond J Mooney. 2016. Repre- senting meaning with a combination of logical and 10 In the set relations by MacCartney (2009), if A \u00c4 B, then A entails B, but B is neutral to A. distributional models. Computational Linguistics, 42(4):763-808.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Essays in Logical Semantics, volume 29 of Studies in Linguistics and Philosophy", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Van Benthem", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan van Benthem. 1986. Essays in Logical Seman- tics, volume 29 of Studies in Linguistics and Philos- ophy. D. Reidel Publishing Co., Dordrecht.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The meaning factory: Formal semantics for recognizing textual entailment and determining semantic similarity", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Bjerva", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Van Der Goot", |
| "suffix": "" |
| }, |
| { |
| "first": "Malvina", |
| "middle": [], |
| "last": "Nissim", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "642--646", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Bjerva, Johan Bos, Rob Van der Goot, and Malvina Nissim. 2014. The meaning factory: For- mal semantics for recognizing textual entailment and determining semantic similarity. In Proceedings of the 8th International Workshop on Semantic Eval- uation (SemEval 2014), pages 642-646.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Recognising Textual Entailment with Logical Inference", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| }, |
| { |
| "first": "Katja", |
| "middle": [], |
| "last": "Markert", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan Bos and Katja Markert. 2005. Recognising Tex- tual Entailment with Logical Inference. In Proceed- ings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A large annotated corpus for learning natural language inference", |
| "authors": [ |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Samuel R Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "632--642", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of EMNLP, pages 632-642.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Supervised learning of universal sentence representations from natural language inference data", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Douwe", |
| "middle": [], |
| "last": "Kiela", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Loic", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1705.02364" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Using the framework", |
| "authors": [ |
| { |
| "first": "Robin", |
| "middle": [], |
| "last": "Cooper", |
| "suffix": "" |
| }, |
| { |
| "first": "Dick", |
| "middle": [], |
| "last": "Crouch", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Van Eijck", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Jaspars", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Kamp", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milward", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework. Technical re- port, Technical Report LRE 62-051 D-16, The Fra- CaS Consortium.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The PASCAL Recognizing Textual Entailment Challenge", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Ido Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Glickman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the PASCAL Challenges Workshop on Recognizing Textual Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "177--190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognizing Textual Entail- ment Challenge. In Proceedings of the PASCAL Challenges Workshop on Recognizing Textual En- tailment, pages 177-190.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language un- derstanding. In Proceedings of NAACL, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Annotation artifacts in natural language inference data", |
| "authors": [ |
| { |
| "first": "Swabha", |
| "middle": [], |
| "last": "Suchin Gururangan", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Swayamdipta", |
| "suffix": "" |
| }, |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah A", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of NAACL: HLT", |
| "volume": "2", |
| "issue": "", |
| "pages": "107--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of NAACL: HLT, volume 2, pages 107-112.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Natural language inference with monotonicity", |
| "authors": [ |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence S", |
| "middle": [], |
| "last": "Moss", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 13th International Conference on Computational Semantics (IWCS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hai Hu, Qi Chen, and Lawrence S Moss. 2019. Natural language inference with monotonicity. In Proceed- ings of the 13th International Conference on Com- putational Semantics (IWCS).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Polarity computations in flexible categorial grammar", |
| "authors": [ |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "S" |
| ], |
| "last": "Moss", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "124--129", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hai Hu and Lawrence S. Moss. 2018. Polarity compu- tations in flexible categorial grammar. In Proceed- ings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 124-129.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Livy Real, Martha Palmer, and Valeria dePaiva", |
| "authors": [ |
| { |
| "first": "Aikaterini-Lida", |
| "middle": [], |
| "last": "Kalouli", |
| "suffix": "" |
| }, |
| { |
| "first": "Annebeth", |
| "middle": [], |
| "last": "Buis", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 13th Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "132--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aikaterini-Lida Kalouli, Annebeth Buis, Livy Real, Martha Palmer, and Valeria dePaiva. 2019. Explain- ing simple natural language inference. In Proceed- ings of the 13th Linguistic Annotation Workshop, pages 132-143.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Textual inference: Getting logic from humans", |
| "authors": [ |
| { |
| "first": "Aikaterini-Lida", |
| "middle": [], |
| "last": "Kalouli", |
| "suffix": "" |
| }, |
| { |
| "first": "Livy", |
| "middle": [], |
| "last": "Real", |
| "suffix": "" |
| }, |
| { |
| "first": "Valeria", |
| "middle": [], |
| "last": "De Paiva", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IWCS 2017: 12th International Conference on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aikaterini-Lida Kalouli, Livy Real, and Valeria de Paiva. 2017. Textual inference: Getting logic from humans. In IWCS 2017: 12th International Conference on Computational Semantics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Wordnet for easy textual inferences", |
| "authors": [ |
| { |
| "first": "Aikaterini-Lida", |
| "middle": [], |
| "last": "Kalouli", |
| "suffix": "" |
| }, |
| { |
| "first": "Livy", |
| "middle": [], |
| "last": "Real", |
| "suffix": "" |
| }, |
| { |
| "first": "Valeria", |
| "middle": [], |
| "last": "De Paiva", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aikaterini-Lida Kalouli, Livy Real, and Valeria de Paiva. 2018. Wordnet for easy textual inferences. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC), Miyazaki, Japan.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Scitail: A textual entailment dataset from science question answering", |
| "authors": [ |
| { |
| "first": "Tushar", |
| "middle": [], |
| "last": "Khot", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Sabharwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Natural Language Inference", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "MacCartney", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford University.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "A phrase-based alignment model for natural language inference", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "MacCartney", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "802--811", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney, Michel Galley, and Christopher D. Manning. 2008. A phrase-based alignment model for natural language inference. In Proceedings of EMNLP, pages 802-811. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Modeling semantic containment and exclusion in natural language inference", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "MacCartney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "521--528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of COLING, pages 521-528.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "An extended model of natural logic", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "MacCartney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "IWCS-8, Proceedings of the Eighth International Conference on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "140--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In IWCS-8, Proceedings of the Eighth International Conference on Computational Semantics, pages 140-156.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "A SICK cure for the evaluation of compositional distributional semantic models", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Marelli", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Menini", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zamparelli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Marelli, S. Menini, M. Baroni, L. Bentivogli, R. Bernardi, and R. Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of LREC 2014.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "ccg2lambda: A compositional semantics system", |
| "authors": [ |
| { |
| "first": "Pascual", |
| "middle": [], |
| "last": "Mart\u00ednez-G\u00f3mez", |
| "suffix": "" |
| }, |
| { |
| "first": "Koji", |
| "middle": [], |
| "last": "Mineshima", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Bekki", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of ACL 2016 System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "85--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascual Mart\u00ednez-G\u00f3mez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2016. ccg2lambda: A compositional semantics system. In Proceedings of ACL 2016 System Demonstrations, pages 85-90, Berlin, Germany. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "On-demand injection of lexical knowledge for recognising textual entailment", |
| "authors": [ |
| { |
| "first": "Pascual", |
| "middle": [], |
| "last": "Mart\u00ednez-G\u00f3mez", |
| "suffix": "" |
| }, |
| { |
| "first": "Koji", |
| "middle": [], |
| "last": "Mineshima", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Bekki", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "710--720", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascual Mart\u00ednez-G\u00f3mez, Koji Mineshima, Yusuke Miyao, and Daisuke Bekki. 2017. On-demand injection of lexical knowledge for recognising textual entailment. In Proceedings of EACL, pages 710-720.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "Thomas" |
| ], |
| "last": "McCoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellie", |
| "middle": [], |
| "last": "Pavlick", |
| "suffix": "" |
| }, |
| { |
| "first": "Tal", |
| "middle": [], |
| "last": "Linzen", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1902.01007" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. arXiv preprint arXiv:1902.01007.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Wordnet: a lexical database for English", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller. 1995. Wordnet: a lexical database for English. Communications of the ACM, 38(11):39-41.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Higher-order logical inference with compositional semantics", |
| "authors": [ |
| { |
| "first": "Koji", |
| "middle": [], |
| "last": "Mineshima", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascual", |
| "middle": [], |
| "last": "Mart\u00ednez-G\u00f3mez", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Bekki", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "2055--2061", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koji Mineshima, Pascual Mart\u00ednez-G\u00f3mez, Yusuke Miyao, and Daisuke Bekki. 2015. Higher-order logical inference with compositional semantics. In Proceedings of EMNLP, pages 2055-2061.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Stress test evaluation for natural language inference", |
| "authors": [ |
| { |
| "first": "Aakanksha", |
| "middle": [], |
| "last": "Naik", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhilasha", |
| "middle": [], |
| "last": "Ravichander", |
| "suffix": "" |
| }, |
| { |
| "first": "Norman", |
| "middle": [], |
| "last": "Sadeh", |
| "suffix": "" |
| }, |
| { |
| "first": "Carolyn", |
| "middle": [], |
| "last": "Rose", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1806.00692" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. arXiv preprint arXiv:1806.00692.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Hypothesis only baselines in natural language inference", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Poliak", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Naradowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Aparajita", |
| "middle": [], |
| "last": "Haldar", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Rudinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "180--191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180-191.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Probing natural language inference models through semantic fragments", |
| "authors": [ |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| }, |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "S" |
| ], |
| "last": "Moss", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Sabharwal", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1909.07521" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyle Richardson, Hai Hu, Lawrence S. Moss, and Ashish Sabharwal. 2019. Probing natural language inference models through semantic fragments. arXiv preprint arXiv:1909.07521.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "A confidence model for syntactically-motivated entailment proofs", |
| "authors": [ |
| { |
| "first": "Asher", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of RANLP", |
| "volume": "", |
| "issue": "", |
| "pages": "455--462", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Asher Stern and Ido Dagan. 2011. A confidence model for syntactically-motivated entailment proofs. In Proceedings of RANLP, pages 455-462.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A broad-coverage challenge corpus for sentence understanding through inference", |
| "authors": [ |
| { |
| "first": "Adina", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikita", |
| "middle": [], |
| "last": "Nangia", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "1", |
| "issue": "", |
| "pages": "1112--1122", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, volume 1, pages 1112-1122.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Acquisition of phrase correspondences using natural deduction proofs", |
| "authors": [ |
| { |
| "first": "Hitomi", |
| "middle": [], |
| "last": "Yanaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Koji", |
| "middle": [], |
| "last": "Mineshima", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascual", |
| "middle": [], |
| "last": "Mart\u00ednez-G\u00f3mez", |
| "suffix": "" |
| }, |
| { |
| "first": "Daisuke", |
| "middle": [], |
| "last": "Bekki", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "756--766", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hitomi Yanaka, Koji Mineshima, Pascual Mart\u00ednez-G\u00f3mez, and Daisuke Bekki. 2018. Acquisition of phrase correspondences using natural deduction proofs. In Proceedings of NAACL-HLT, pages 756-766, New Orleans, LA.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Taskspecific attentive pooling of phrase alignments contributes to sentence matching", |
| "authors": [ |
| { |
| "first": "Wenpeng", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "699--709", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2017. Task-specific attentive pooling of phrase alignments contributes to sentence matching. In Proceedings of EACL, pages 699-709.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "No schoolgirl is on a crowded train; A schoolgirl with a bag is not on a crowded train; ... (edges labeled: contradiction)", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "<table><tr><td>total</td><td>N \u2192 E</td><td>E \u2192 C</td><td>N \u2192 C</td><td>E \u2192 N</td></tr><tr><td>409</td><td>14</td><td>7</td><td>190</td><td>198</td></tr></table>", |
| "text": "Examples from SICK (Marelli et al., 2014) and corrected SICK (Kalouli et al., 2017, 2018) w/ syntactic variations. n.a.: example not checked by Kalouli and her colleagues. C: contradiction; E: entailment; N: neutral.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table/>", |
| "text": "", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "html": null, |
| "content": "<table/>", |
| "text": "Performance on the SICK test set, original SICK above and corrected SICK below. P / R for MonaLog averaged across three labels. Results involving BERT are averaged across six runs; same for later experiments.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "html": null, |
| "content": "<table><tr><td></td><td colspan=\"2\">E</td><td colspan=\"2\">C</td><td colspan=\"2\">N</td></tr><tr><td></td><td>P</td><td>R</td><td>P</td><td>R</td><td>P</td><td>R</td></tr><tr><td>uncorr. SICK</td><td>97.75</td><td>46.74</td><td>80.06</td><td>70.24</td><td>73.43</td><td>94.99</td></tr><tr><td>corr. SICK</td><td>98.50</td><td>50.46</td><td>95.02</td><td>73.60</td><td>76.22</td><td>98.63</td></tr></table>", |
| "text": "", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "html": null, |
| "content": "<table><tr><td>id premise</td><td>hypothesis</td><td colspan=\"3\">SICK corr. SICK Mona</td></tr><tr><td>359 There is no dog chasing an-</td><td>Two dogs are running and</td><td>N</td><td>n.a.</td><td>C</td></tr><tr><td>other or holding a stick in its</td><td>carrying an object in their</td><td/><td/><td/></tr><tr><td>mouth</td><td>mouths</td><td/><td/><td/></tr><tr><td>1402 A man is crying</td><td>A man is screaming</td><td>N</td><td>n.a.</td><td>E</td></tr><tr><td colspan=\"2\">1760 A flute is being played by a girl There is no woman playing a</td><td>N</td><td>n.a.</td><td>C</td></tr><tr><td/><td>flute</td><td/><td/><td/></tr><tr><td>2897 The man is lifting weights</td><td colspan=\"2\">The man is lowering barbells N</td><td>n.a.</td><td>E</td></tr><tr><td>2922 A herd of caribous is not cross-</td><td>A herd of deer is crossing a</td><td>N</td><td>n.a.</td><td>C</td></tr><tr><td>ing a road</td><td>street</td><td/><td/><td/></tr><tr><td>3403 A man is folding a tortilla</td><td>A man is unfolding a tortilla</td><td>N</td><td>n.a.</td><td>C</td></tr><tr><td>4333 A woman is picking a can</td><td>A woman is taking a can</td><td>E</td><td>N</td><td>E</td></tr><tr><td>5138 A man is doing a card trick</td><td colspan=\"2\">A man is doing a magic trick N</td><td>n.a.</td><td>E</td></tr><tr><td>5793 A man is cutting a fish</td><td/><td/><td/><td/></tr></table>", |
| "text": "Results of MonaLog per relation. C: contradiction; E: entailment; N: neutral.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF9": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"5\">label total correct wrong unnatural</td></tr><tr><td>E</td><td>56</td><td>49</td><td>0</td><td>7</td></tr><tr><td>C</td><td>44</td><td>41</td><td>0</td><td>3</td></tr></table>", |
| "text": "Sentence pairs generated by MonaLog, lemmatized.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "html": null, |
| "content": "<table/>", |
| "text": "Quality of 100 manually inspected sentences.", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |