{
"paper_id": "P16-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:57:49.087403Z"
},
"title": "Combining Natural Logic and Shallow Reasoning for Question Answering",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": "",
"affiliation": {},
"email": "angeli@cs.stanford.edu"
},
{
"first": "Neha",
"middle": [],
"last": "Nayak",
"suffix": "",
"affiliation": {},
"email": "nayakne@cs.stanford.edu"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {},
"email": "manning@cs.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Broad domain question answering is often difficult in the absence of structured knowledge bases, and can benefit from shallow lexical methods (broad coverage) and logical reasoning (high precision). We propose an approach for incorporating both of these signals in a unified framework based on natural logic. We extend the breadth of inferences afforded by natural logic to include relational entailment (e.g., buy \u2192 own) and meronymy (e.g., a person born in a city is born in the city's country). Furthermore, we train an evaluation function, akin to game playing, to evaluate the expected truth of candidate premises on the fly. We evaluate our approach on answering multiple choice science questions, achieving strong results on the dataset.",
"pdf_parse": {
"paper_id": "P16-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "Broad domain question answering is often difficult in the absence of structured knowledge bases, and can benefit from shallow lexical methods (broad coverage) and logical reasoning (high precision). We propose an approach for incorporating both of these signals in a unified framework based on natural logic. We extend the breadth of inferences afforded by natural logic to include relational entailment (e.g., buy \u2192 own) and meronymy (e.g., a person born in a city is born in the city's country). Furthermore, we train an evaluation function, akin to game playing, to evaluate the expected truth of candidate premises on the fly. We evaluate our approach on answering multiple choice science questions, achieving strong results on the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question answering is an important task in NLP, and becomes both more important and more difficult when the answers are not supported by hand-curated knowledge bases. In these cases, viewing question answering as textual entailment over a very large premise set can offer a means of generalizing reliably to open domain questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A natural approach to textual entailment is to treat it as a logical entailment problem. However, this high-precision approach is not feasible in cases where a formal proof is difficult or impossible. For example, consider the following hypothesis (H) and its supporting premise (P) for the question Which part of a plant produces the seeds?: P: Ovaries are the female part of the flower, which produces eggs that are needed for making seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "H: A flower produces the seeds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This requires a relatively large amount of inference: the most natural atomic fact in the sentence is that ovaries produce eggs. These inferences are feasible in a limited domain, but become difficult the more open-domain reasoning they require. In contrast, even a simple lexical overlap classifier could correctly predict the entailment. In fact, such a bag-of-words entailment model has been shown to be surprisingly effective on the Recognizing Textual Entailment (RTE) challenges (MacCartney, 2009). On the other hand, such methods are also notorious for ignoring even trivial cases of nonentailment that are easy for natural logic, e.g., recognizing negation in the example below: P: Eating candy for dinner is an example of a poor health habit.",
"cite_spans": [
{
"start": 485,
"end": 502,
"text": "(MacCartney, 2009",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "H: Eating candy is an example of a good health habit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present an approach to leverage the benefits of both methods. Natural logic -a proof theory over the syntax of natural language -offers a framework for logical inference which is already familiar to lexical methods. As an inference system searches for a valid premise, the candidates it explores can be evaluated on their similarity to a premise by a conventional lexical classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We therefore extend a natural logic inference engine in two key ways: first, we handle relational entailment and meronymy, increasing the total number of inferences that can be made. We further implement an evaluation function which quickly provides an estimate for how likely a candidate premise is to be supported by the knowledge base, without running the full search. This can then more easily match a known premise despite still not matching exactly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present the following contributions: (1) we extend the classes of inferences NaturalLI can perform on real-world sentences by incorporating relational entailment and meronymy, and by operating over dependency trees; (2) we augment NaturalLI with an evaluation function to provide an estimate of entailment for any query; and (3) we run our system over the Aristo science questions corpus, achieving strong results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We briefly review natural logic and NaturalLI, the existing inference engine we use. Much of this paper will extend this system, with additional inferences (Section 3) and a soft lexical classifier (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Natural logic is a formal proof theory that aims to capture a subset of logical inferences by appealing directly to the structure of language, without needing either an abstract logical language (e.g., Markov Logic Networks; Richardson and Domingos (2006) ) or denotations (e.g., semantic parsing; Liang and Potts (2015) ). We use the logic introduced by the NatLog system (MacCartney and Manning, 2007; 2008; 2009) , which was in turn based on earlier theoretical work on Monotonicity Calculus (van Benthem, 1986; S\u00e1nchez Valencia, 1991) . We adopt the precise semantics of Icard and Moss (2014) ; we refer the reader to this paper for a more thorough introduction to the formalism.",
"cite_spans": [
{
"start": 225,
"end": 255,
"text": "Richardson and Domingos (2006)",
"ref_id": "BIBREF32"
},
{
"start": 298,
"end": 320,
"text": "Liang and Potts (2015)",
"ref_id": "BIBREF22"
},
{
"start": 373,
"end": 403,
"text": "(MacCartney and Manning, 2007;",
"ref_id": "BIBREF24"
},
{
"start": 404,
"end": 409,
"text": "2008;",
"ref_id": "BIBREF25"
},
{
"start": 410,
"end": 415,
"text": "2009)",
"ref_id": "BIBREF27"
},
{
"start": 495,
"end": 514,
"text": "(van Benthem, 1986;",
"ref_id": "BIBREF37"
},
{
"start": 515,
"end": 538,
"text": "S\u00e1nchez Valencia, 1991)",
"ref_id": "BIBREF34"
},
{
"start": 575,
"end": 596,
"text": "Icard and Moss (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "At a high level, natural logic proofs operate by mutating spans of text to ensure that the mutated sentence follows from the original -each step is much like a syllogistic inference. Each mutation in the proof follows three steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "1. An atomic lexical relation is induced by either inserting, deleting or mutating a span in the sentence. For example, in Figure 1 , mutating The to No induces the alternation relation; mutating cat to carnivore induces the \u2291 relation. The relations \u2261 and \u2291 are variants of entailment; negation and alternation are variants of negation. 2. This lexical relation between words is projected up to yield a relation between sentences, based on the polarity of the token. For instance, The cat eats animals \u2291 some carnivores eat animals. We explain this in more detail below. 3. These sentence-level relations are joined together to produce a relation between a premise and a hypothesis multiple mutations away. For example in Figure 1 , joining the relations along the path from the hypothesis to the premise yields negation.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": null
},
{
"start": 685,
"end": 693,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "The notion of projecting a relation from a lexical item to a sentence is important to understand. To illustrate, cat \u2291 animal, and some cat meows \u2291 some animal meows (recall, \u2291 denotes entailment), but no cat barks \u2292 no animal barks. Despite differing by the same lexical relation, the sentence-level relation is different in the two cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "We appeal to two important concepts: monotonicity -a property of arguments to natural language operators; and polarity -a property of tokens. From the example above, some is monotone in its first argument (i.e., cat or animal), and no is antitone in its first argument. This means that the first argument to some is allowed to mutate up the specified hierarchy (e.g., hypernymy), whereas the first argument to no is allowed to mutate down.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "Polarity is a property of tokens in a sentence determined by the operators acting on it. All lexical items have upward polarity by default; monotone operators -like some, several, or a few -preserve polarity. Antitone operators -like no, not, and all (in its first argument) -reverse polarity. For example, mice in no cats eat mice has downward polarity, whereas mice in no cats don't eat mice has upward polarity (it is in the scope of two downward monotone operators).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "As a final note, although we refer to the monotonicity calculus described above as natural logic, this formalism is only one of many possible natural logics. For example, McAllester and Givan (1992) introduce a syntax for first order logic which they call Montagovian syntax. This syntax has two key advantages over first order logic: first, the \"quantifier-free\" version of the syntax (roughly equivalent to the monotonicity calculus we use) is computationally efficient while still handling limited quantification. Second, the syntax more closely mirrors that of natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Logic",
"sec_num": "2.1"
},
{
"text": "We build our extensions within the framework of NaturalLI, introduced by Angeli and Manning (2014) . NaturalLI casts inference as a search problem: given a hypothesis and an arbitrarily large corpus of text, it searches through the space of lexical mutations (e.g., cat \u2192 carnivore), with associated costs, until a premise is found.",
"cite_spans": [
{
"start": 73,
"end": 98,
"text": "Angeli and Manning (2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NaturalLI",
"sec_num": "2.2"
},
{
"text": "An example search using NaturalLI is given in Figure 1 . The relations along the edges denote relations between the associated sentences -i.e., the projected lexical relations from Section 2.2. Figure 1 : An illustration of NaturalLI searching for a candidate premise to support the hypothesis at the root of the tree. We are searching from a hypothesis no carnivores eat animals, and find a contradicting premise the cat ate a mouse. The edge labels denote Natural Logic inference steps.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 1",
"ref_id": null
},
{
"start": 194,
"end": 202,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "NaturalLI",
"sec_num": "2.2"
},
{
"text": "Importantly, and in contrast with traditional entailment systems, NaturalLI searches over an arbitrarily large knowledge base of textual premises rather than a single premise/hypothesis pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NaturalLI",
"sec_num": "2.2"
},
{
"text": "We extend NaturalLI in three ways to improve its coverage. We adapt the search algorithm to operate over dependency trees rather than the surface forms (Section 3.1). We enrich the class of inferences warranted by natural logic beyond hypernymy and operator rewording to also encompass meronymy and relational entailment (Section 3.2). Lastly, we handle token insertions during search more elegantly (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Inference in NaturalLI",
"sec_num": "3"
},
{
"text": "The general search algorithm in NaturalLI is parametrized as follows: First, an order is chosen to traverse the tokens in a sentence. For example, the original paper traverses tokens left-to-right. At each token, one of three operations can be performed: deleting a token (corresponding to inserting a word in the proof derivation), mutating a token, and inserting a token (corresponding to deleting a token in the proof derivation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Inference in NaturalLI",
"sec_num": "3"
},
{
"text": "Operating over dependency trees rather than a token sequence requires reworking (1) the semantics of deleting a token during search, and (2) the order in which the sentence is traversed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural logic over Dependency Trees",
"sec_num": "3.1"
},
{
"text": "We recently defined a mapping from Stanford Dependency relations to the associated lexical relation deleting the dependent subtree would induce (Angeli et al., 2015) . We adapt this mapping to yield the relation induced by inserting a given dependency edge, corresponding to our deletions in search; we also convert the mapping to use Universal Dependencies (de Marneffe et al., 2014) . This now lends a natural deletion operation: at a given node, the subtree rooted at that node can be deleted to induce the associated natural logic relation.",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "(Angeli et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 358,
"end": 384,
"text": "(de Marneffe et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural logic over Dependency Trees",
"sec_num": "3.1"
},
{
"text": "For example, we can infer that all truly notorious villains have lairs from the premise all villains have lairs by observing that deleting an amod arc induces the relation \u2292, which in the downward polarity context of villains \u2193 projects to \u2291 (entailment):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural logic over Dependency Trees",
"sec_num": "3.1"
},
{
"text": "All \u2191 truly \u2193 notorious \u2193 villains \u2193 have \u2191 lairs \u2191 (dependency arcs: operator, nsubj, amod, advmod, dobj).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural logic over Dependency Trees",
"sec_num": "3.1"
},
{
"text": "An admittedly rare but interesting subtlety in the order we chose to traverse the tokens in the sentence is the effect mutating an operator has on the polarity of its arguments. For example, mutating some to all changes the polarity of its first argument. There are cases where we must mutate the argument to the operator before the operator itself, as well as cases where we must mutate the operator before its arguments. Consider, for instance, the example below, where we must first mutate some to all. Therefore, our traversal first visits each operator, then performs a breadth-first traversal of the tree, and then visits each operator a second time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural logic over Dependency Trees",
"sec_num": "3.1"
},
{
"text": "P: All",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Natural logic over Dependency Trees",
"sec_num": "3.1"
},
{
"text": "Although natural logic and the underlying monotonicity calculus have only been explored in the context of hypernymy, the underlying framework can be applied to any partial order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "Natural language operators can be defined as a mapping from denotations of objects to truth values. The domain of word denotations is then ordered by the subset operator, corresponding to ordering by hypernymy over the words. However, hypernymy is not the only useful partial ordering over denotations. We include two additional orderings as motivating examples: relational entailment and meronymy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "Relational Entailment For two verbs v1 and v2, we define v1 \u2264 v2 if the first verb entails the second. In many cases, a verb v1 may entail a verb v2 even if v2 is not a hypernym of v1. For example, to sell something (hopefully) entails owning that thing. Apart from context-specific cases (e.g., orbit entails launch only for man-made objects), these hold largely independent of context. Note that the usual operators apply to relational entailments -if all cactus owners live in Arizona then all cactus sellers live in Arizona.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "This information was incorporated using data from VerbOcean (Chklovski and Pantel, 2004) , adapting the confidence weights as transition costs. VerbOcean uses lexicosyntactic patterns to score pairs of verbs as candidate participants in a set of relations. We approximate the VerbOcean relations stronger-than(v1, v2) (e.g., to kill is stronger than to wound) and happens-before(v2, v1) (e.g., buying happens before owning) to indicate that v1 entails v2. These verb entailment transitions are incorporated using costs derived from the original weights from Chklovski and Pantel (2004) .",
"cite_spans": [
{
"start": 60,
"end": 88,
"text": "(Chklovski and Pantel, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 570,
"end": 597,
"text": "Chklovski and Pantel (2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "Meronymy The most salient use-case for meronymy is with locations. For example, if Obama was born in Hawaii, then we know that Obama was born in America, because Hawaii is a meronym of (part of) America. Unlike relational entailment and hypernymy, meronymy is operated on by a distinct set of operators: if Hawaii is an island, we cannot necessarily entail that America is an island.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "We semi-automatically collect a set of 81 operators (e.g., born in, visited) which then compose in the usual way with the conventional operators (e.g., some, all). These operators consist of dependency paths of length 2 that co-occurred in newswire text with a named entity of type PERSON and two different named entities of type LOCATION, such that one location was a meronym of the other. All other operators are considered nonmonotone with respect to the meronym hierarchy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "Note that these are not the only two orders that can be incorporated into our framework; they just happen to be two which have lexical resources available and are likely to be useful in real-world entailment tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meronymy and Relational Entailment",
"sec_num": "3.2"
},
{
"text": "Inserting words during search poses an inherent problem, as the space of possible words to insert at any position is on the order of the size of the vocabulary. In NaturalLI, this was solved by keeping a trie of possible insertions, and using that to prune this space. This is both computationally slow and awkward to adapt to a search over dependency trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the Insertion Transition",
"sec_num": "3.3"
},
{
"text": "Therefore, this work instead opts to perform a bidirectional search: when constructing the knowledge base, we add not only the original sentence but also all entailments with subtrees deleted. For example, a premise of some furry cats have tails would yield two facts for the knowledge base: some furry cats have tails as well as some cats have tails. For this, we use the process described in Angeli et al. (2015) to generate short entailed sentences from a long utterance using natural logic. This then leaves the reverse search to only deal with mutations and inference insertions, which are relatively easy.",
"cite_spans": [
{
"start": 394,
"end": 414,
"text": "Angeli et al. (2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the Insertion Transition",
"sec_num": "3.3"
},
{
"text": "The new challenge this introduces, of course, is the additional space required to store the new facts. To mitigate this, we hash every fact into a 64 bit integer, and store only the hashed value in the knowledge base. We construct this hash function such that it operates over a bag of edges in the dependency tree. This has two key properties: it allows us to be invariant to the word order of the sentence, and more importantly it allows us to run our search directly over modifications to this hash function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the Insertion Transition",
"sec_num": "3.3"
},
{
"text": "To elaborate, we notice that each of the two classes of operations our search is performing is done locally over a single dependency edge. When adding an edge, we can simply take the XOR of the hash saved in the parent state and the hash of the added edge. When mutating an edge, we XOR the hash of the parent state with the edge we are mutating, and again with the mutated edge. In this way, each search node need only carry an 8 byte hash, local information about the edge currently being considered (8 bytes), global information about the words deleted during search (5 bytes), a 3 byte backpointer to recover the inference path, and 8 bytes of operator metadata -32 bytes in all, amounting to exactly half a cache line on our machines. This careful attention to data structures and memory layout turns out to have a large impact on runtime efficiency. More details are given in Angeli (2016) .",
"cite_spans": [
{
"start": 882,
"end": 895,
"text": "Angeli (2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Removing the Insertion Transition",
"sec_num": "3.3"
},
{
"text": "There are many cases -particularly as the length of the premise and the hypothesis grow -where despite our improvements NaturalLI will fail to find any supporting premises; for example: P: Food serves mainly for growth, energy and body repair, maintenance and protection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for NaturalLI",
"sec_num": "4"
},
{
"text": "H: Animals get energy for growth and repair from food.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for NaturalLI",
"sec_num": "4"
},
{
"text": "In addition to requiring reasoning with multiple implicit premises (a concomitant weak point of natural logic), a correct interpretation of the sentence requires fairly nontrivial nonlocal reasoning: Food serves mainly for x \u2192 Animals get x from food.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for NaturalLI",
"sec_num": "4"
},
{
"text": "Nonetheless, there are enough lexical clues in the sentence that even a simple entailment classifier would get the example correct. We build such a classifier and adapt it as an evaluation function inside NaturalLI in case no premises are found during search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for NaturalLI",
"sec_num": "4"
},
{
"text": "Our entailment classifier is designed to be as domain independent as possible; therefore we define only 5 unlexicalized real-valued features, with an optional sixth feature encoding the score output by the Solr information extraction system (in turn built upon Lucene). In fact, this classifier is a stronger baseline than it may seem: evaluating the system on RTE-3 (Giampiccolo et al., 2007) yielded 63.75% accuracy -2 points above the median submission.",
"cite_spans": [
{
"start": 367,
"end": 393,
"text": "(Giampiccolo et al., 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Standalone Entailment Classifier",
"sec_num": "4.1"
},
{
"text": "All five of the core features are based on an alignment of keyphrases between the premise and the hypothesis. A keyphrase is defined as a span of text which is either (1) a possibly empty sequence of adjectives and adverbs followed by a sequence of nouns, and optionally followed by either of or the possessive marker ('s), and another noun (e.g., sneaky kitten or pail of water); (2) a possibly empty sequence of adverbs followed by a verb (e.g., quietly pounce); or (3) a gerund followed by a noun (e.g., flowing water). The verb to be is never a keyphrase. We make a distinction between a keyphrase and a keyword -the latter is a single noun, adjective, or verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Standalone Entailment Classifier",
"sec_num": "4.1"
},
{
"text": "We then align keyphrases in the premise and hypothesis by applying a series of sieves. First, all exact matches are aligned to each other. Then, prefix or suffix matches are aligned, then if either keyphrase contains the other they are aligned as well. Last, we align a keyphrase in the premise p i to a keyphrase in the hypothesis h k if there is an alignment between p i\u22121 and h k\u22121 and between p i+1 and h k+1 . This forces any keyphrase pair which is \"sandwiched\" between aligned pairs to be aligned as well. An example alignment is given in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 546,
"end": 554,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Standalone Entailment Classifier",
"sec_num": "4.1"
},
{
"text": "Features are extracted for the number of alignments, the numbers of alignments which do and do not match perfectly, and the number of keyphrases in the premise and hypothesis which were not aligned. A feature for the Solr score of the premise given the hypothesis is optionally included; we revisit this issue in the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Standalone Entailment Classifier",
"sec_num": "4.1"
},
{
"text": "A version of the classifier constructed in Section 4.1, but over keywords rather than keyphrases, can be incorporated directly into NaturalLI's search to give a score for each candidate premise visited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "Figure 3 : An illustration of an alignment between a premise and a hypothesis. Premise: Heat energy is being transferred when a stove is used to boil water in a pan. Hypothesis: When you heat water on a stove, thermal energy is transferred. Keyphrases can be multiple words (e.g., heat energy), and can be approximately matched (e.g., to thermal energy). In the premise, used, boil and pan are unaligned. Note that heat water is incorrectly tagged as a compound noun.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "This can be thought of as analogous to the evaluation function in game-playing search -even though an agent cannot play a game of Chess to completion, at some depth it can apply an evaluation function to its leaf states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "Using keywords rather than keyphrases is in general a hindrance to the fuzzy alignments the system can produce. Importantly though, this allows the feature values to be computed incrementally as the search progresses, based on the score of the parent state and the mutation or deletion being performed. For instance, if we are deleting a word which was previously aligned perfectly to the premise, we would subtract the weight for a perfect and imperfect alignment, and add the weight for an unaligned premise keyphrase. This has the same effect as applying the trained classifier to the new state, and uses the same weights learned for this classifier, but requires substantially less computation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "In addition to finding entailments from candidate premises, our system also allows us to encode a notion of likely negation. Consider the following two statements, which na\u00efvely share every keyword, with each token marked with its polarity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "P: some \u2191 cats \u2191 have \u2191 tails \u2191 H: no \u2191 cats \u2193 have \u2193 tails \u2193",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "However, we note that all of the keyword pairs are in opposite polarity contexts. We can therefore define a pair of keywords as matching in NaturalLI if the following two conditions hold: (1) their lemmatized surface forms match exactly, and (2) they have the same polarity in the sentence. The second constraint encodes a good approximation for negation. To illustrate, consider the polarity signatures of common operators:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Evaluation Function for Search",
"sec_num": "4.2"
},
{
"text": "Some, few, etc.: subject \u2191, object \u2191; All, every, etc.: subject \u2193, object \u2191; Not all, etc.: subject \u2191, object \u2193; No, not, etc.: subject \u2193, object \u2193; Most, many, etc.: subject -, object \u2191.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operators",
"sec_num": null
},
{
"text": "We note that most contradictory operators (e.g., some/no; all/not all) induce the exact opposite polarity on their arguments. In contrast, pairs of operators which share half their signature are usually compatible with each other (e.g., some and all).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operators",
"sec_num": null
},
{
"text": "This suggests a criterion for likely negation: If the highest classifier score is produced by a contradictory candidate premise, we have reason to believe that we may have found a contradiction. To illustrate with our example, NaturalLI would mutate no cats have tails to the cats have tails, at which point it has found a contradictory candidate premise which has perfect overlap with the premise some cats have tails. Even had we not found the exact premise, this suggests that the hypothesis is likely false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operators",
"sec_num": null
},
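The likely-negation criterion can be sketched as follows, with made-up scores and contradiction flags standing in for the classifier and the search state:

```python
# If the highest-scoring candidate premise found during search is in a
# contradictory (opposite-polarity) state, report the hypothesis as
# likely false. Scores and states below are fabricated for illustration.

def judge(candidates):
    """candidates: list of (classifier_score, is_contradictory)."""
    best_score, contradictory = max(candidates, key=lambda c: c[0])
    return "likely false" if contradictory else "likely true"

# Searching from "no cats have tails", the best match is the
# contradictory mutation "the cats have tails", which overlaps the
# premise "some cats have tails" perfectly.
candidates = [(0.4, False), (1.0, True)]
assert judge(candidates) == "likely false"
```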
{
"text": "This work is similar in many ways to work on recognizing textual entailment -e.g., Schoenmackers et al. (2010) , Berant et al. (2011) , Lewis and Steedman (2013) . In the RTE task, a single premise and a single hypothesis are given as input, and a system must return a judgment of either entailment or nonentailment (in later years, nonentailment is further split into contradiction and independence). These approaches often rely on alignment features, similar to ours, but do not generally scale to large premise sets (i.e., a comprehensive knowledge base). The discourse commitments in Hickl and Bensley (2007) can be thought of as similar to the additional entailed facts we add to the knowledge base (Section 3.3). In another line of work, Tian et al. (2014) approach the RTE problem by parsing into Dependency Compositional Semantics (DCS) (Liang et al., 2011) . This work is particularly relevant in that it also incorporates an evaluation function (using distributional similarity) to augment their theorem prover -although in their case, this requires a translation back and forth between DCS and language. Beltagy et al. (To appear 2016) takes a similar approach, but encodes distributional information directly in entailment rules in a Markov Logic Network (Richardson and Domingos, 2006) .",
"cite_spans": [
{
"start": 83,
"end": 110,
"text": "Schoenmackers et al. (2010)",
"ref_id": "BIBREF35"
},
{
"start": 113,
"end": 133,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 136,
"end": 161,
"text": "Lewis and Steedman (2013)",
"ref_id": "BIBREF21"
},
{
"start": 588,
"end": 612,
"text": "Hickl and Bensley (2007)",
"ref_id": "BIBREF18"
},
{
"start": 744,
"end": 762,
"text": "Tian et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 845,
"end": 865,
"text": "(Liang et al., 2011)",
"ref_id": "BIBREF23"
},
{
"start": 1265,
"end": 1296,
"text": "(Richardson and Domingos, 2006)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Many systems make use of structured knowledge bases for question answering. Semantic parsing methods (Zettlemoyer and Collins, 2005; Liang et al., 2011) use knowledge bases like Freebase to find support for a complex question. Knowledge base completion (e.g., Chen et al. (2013) , Bordes et al. (2011 ), or Riedel et al. (2013 ) can be thought of as entailment, predicting novel knowledge base entries from the original database. In contrast, this work runs inference over arbitrary text without needing a structured knowledge base. Open IE (Wu and Weld, 2010; Mausam et al., 2012) QA approaches -e.g., Fader et al. (2014) are closer to operating over plain text, but still require structured extractions.",
"cite_spans": [
{
"start": 101,
"end": 132,
"text": "(Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF39"
},
{
"start": 133,
"end": 152,
"text": "Liang et al., 2011)",
"ref_id": "BIBREF23"
},
{
"start": 260,
"end": 278,
"text": "Chen et al. (2013)",
"ref_id": "BIBREF7"
},
{
"start": 281,
"end": 300,
"text": "Bordes et al. (2011",
"ref_id": "BIBREF6"
},
{
"start": 301,
"end": 326,
"text": "), or Riedel et al. (2013",
"ref_id": "BIBREF33"
},
{
"start": 541,
"end": 560,
"text": "(Wu and Weld, 2010;",
"ref_id": "BIBREF38"
},
{
"start": 561,
"end": 581,
"text": "Mausam et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 603,
"end": 622,
"text": "Fader et al. (2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Of course, this work is not alone in attempting to incorporate strict logical reasoning into question answering systems. The COGEX system (Moldovan et al., 2003) incorporates a theorem prover into a QA system, boosting overall performance on the TREC QA task. Similarly, Watson (Ferrucci et al., 2010) incorporates logical reasoning components alongside shallower methods. This work follows a similar vein, but both the theorem prover and lexical classifier operate over text, without requiring either the premises or axioms to be in logical forms.",
"cite_spans": [
{
"start": 138,
"end": 161,
"text": "(Moldovan et al., 2003)",
"ref_id": "BIBREF30"
},
{
"start": 278,
"end": 301,
"text": "(Ferrucci et al., 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "On the Aristo corpus we evaluate on, Hixon et al. (2015) proposes a dialog system to augment a knowledge graph used for answering the questions. This is in a sense an oracle measure, where a human is consulted while answering the question, although they show that their additional extractions help answer questions other than the one the dialog was collected for.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We evaluate our entailment system on the Regents Science Exam portion of the Aristo dataset (Clark et al., 2013; Clark, 2015) . The dataset consists of a collection of multiple-choice science questions from the New York Regents 4th Grade Science Exams (NYSED, 2014) . Each multiple choice option is translated to a candidate hypothesis. A large corpus is given as a knowledge base; the task is to find support in this knowledge base for the hypothesis.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Clark et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 113,
"end": 125,
"text": "Clark, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 253,
"end": 266,
"text": "(NYSED, 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Our system is in many ways well-suited to the dataset. Although certainly many of the facts require complex reasoning (see Section 6.4), the majority can be answered from a single premise. Unlike FraCaS (Cooper et al., 1996) or the RTE challenges, however, the task does not have explicit premises to run inference from, but rather must infer the truth of the hypothesis from a large collection of supporting text.",
"cite_spans": [
{
"start": 203,
"end": 224,
"text": "(Cooper et al., 1996)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "We make use of two collections of unlabeled corpora for our experiments. The first of these is the Barron's study guide (BARRON'S), consisting of 1200 sentences. This is the corpus used by Hixon et al. (2015) for their conversational dialog engine Knowbot, and therefore constitutes a more fair comparison against their results. However, we also make use of the full SCITEXT corpus (Clark et al., 2014) . This corpus consists of 1 316 278 supporting sentences, including the Barron's study guide alongside simple Wikipedia, dictionaries, and a science textbook.",
"cite_spans": [
{
"start": 189,
"end": 208,
"text": "Hixon et al. (2015)",
"ref_id": "BIBREF19"
},
{
"start": 382,
"end": 402,
"text": "(Clark et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "6.1"
},
{
"text": "Since we lose all document context when searching over the corpus with NaturalLI, we first pre-process the corpus to resolve high-precision cases of pronominal coreference, via a set of very simple high-precision sieves. These find the most recent candidate antecedent (NP or named entity) which, in order of preference, matches the pronoun's animacy, gender, and number. Filtering to remove duplicate sentences and sentences containing non-ASCII characters yields a total of 822 748 facts in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "6.1"
},
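A toy version of such a sieve might look like the following; the attribute labels are hand-assigned here, whereas the real system derives them from parse and named-entity information:

```python
# Scan left from a pronoun and take the most recent candidate antecedent
# whose animacy, gender, and number agree (None = unconstrained).
# The pronoun inventory and attribute labels are illustrative only.

PRONOUNS = {"it": ("inanimate", "neuter", "sg"), "they": (None, None, "pl")}

def resolve(pronoun, antecedents):
    """antecedents: (mention, animacy, gender, number), left-to-right."""
    p_anim, p_gen, p_num = PRONOUNS[pronoun]
    for mention, anim, gen, num in reversed(antecedents):
        if ((p_anim is None or anim == p_anim)
                and (p_gen is None or gen == p_gen)
                and (p_num is None or num == p_num)):
            return mention
    return None  # high precision: leave unresolved rather than guess

ants = [("the scientists", "animate", None, "pl"),
        ("the experiment", "inanimate", "neuter", "sg")]
assert resolve("it", ants) == "the experiment"
assert resolve("they", ants) == "the scientists"
```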
{
"text": "These sentences were then indexed using Solr. The set of promising premises for the soft alignment in Section 4, as well as the Solr score feature in the lexical classifier (Section 4.1), were obtained by querying Solr using the default similarity metric and scoring function. On the query side, questions were converted to answers using the same methodology as Hixon et al. (2015) . In cases where the question contained multiple sentences, only the last sentence was considered. As discussed in Section 6.4, we do not attempt reasoning over multiple sentences, and the last sentence is likely the most informative sentence in a longer passage.",
"cite_spans": [
{
"start": 362,
"end": 381,
"text": "Hixon et al. (2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "6.1"
},
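As a rough sketch of this retrieval step, the following builds a Solr /select query for a hypothesis; the host, core name, and field list are assumptions for illustration, not details from the paper:

```python
# Hypothetical sketch: issue the hypothesis text as a query against the
# indexed corpus using Solr's default similarity, asking for the top
# candidate premises. Host, core, and fields are assumed names.

from urllib.parse import urlencode

def solr_query_url(hypothesis, rows=8, host="http://localhost:8983",
                   core="scitext"):
    """Build a /select URL requesting the top `rows` candidate premises."""
    params = {"q": hypothesis, "rows": rows, "fl": "text,score", "wt": "json"}
    return f"{host}/solr/{core}/select?{urlencode(params)}"

url = solr_query_url("some cats have tails")
assert "q=some+cats+have+tails" in url and "rows=8" in url
```

In practice the JSON response's `score` field would then be fed to the lexical classifier as the Solr score feature.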
{
"text": "To train a soft entailment classifier, we needed a set of positive and negative entailment instances. These were collected on Mechanical Turk. In particular, for each true hypothesis in the training set and for each sentence in the Barron's study guide, we found the top 8 results from Solr and considered these to be candidate entailments. These were then shown to Turkers, who decided whether the premise entailed the hypothesis, the hypothesis entailed the premise, both, or neither. Note that each pair was shown to only one Turker, lowering the cost of data collection, but consequently resulting in a somewhat noisy dataset. The data was augmented with additional negatives, collected by taking the top 10 Solr results for each false hypothesis in the training set. This yielded a total of 21 306 examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training an Entailment Classifier",
"sec_num": "6.2"
},
{
"text": "The scores returned from NaturalLI incorporate negation in two ways: if NaturalLI finds a contradictory premise, the score is set to zero. If NaturalLI finds a soft negation (see Section 4.2), and did not find an explicit supporting premise, the score is discounted by 0.75 -a value tuned on the training set. For all systems, any premise which did not contain the candidate answer to the multiple choice query was discounted by a value tuned on the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training an Entailment Classifier",
"sec_num": "6.2"
},
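The score adjustments described here can be sketched as follows; the 0.75 soft-negation discount is from the text (interpreted multiplicatively here), while the answer-mention discount value is a placeholder for the tuned one:

```python
# Sketch of NaturalLI's score post-processing: explicit contradictions
# zero the score; soft negation without explicit support discounts it by
# 0.75; premises not mentioning the candidate answer are also discounted.
# ANSWER_DISCOUNT is a made-up stand-in for the tuned value.

ANSWER_DISCOUNT = 0.5  # placeholder; the paper tunes this on training data

def adjust(score, contradictory, soft_negation, explicit_support,
           premise_mentions_answer):
    if contradictory:
        return 0.0  # explicit contradiction: score set to zero
    if soft_negation and not explicit_support:
        score *= 0.75  # soft negation discount from the text
    if not premise_mentions_answer:
        score *= ANSWER_DISCOUNT
    return score

assert adjust(0.8, True, False, False, True) == 0.0
assert abs(adjust(0.8, False, True, False, True) - 0.6) < 1e-9
```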
{
"text": "We present results on the Aristo dataset in Table 1 , alongside prior work and strong baselines. In all cases, NaturalLI is run with the evaluation function enabled; the limited size of the text corpus and the complexity of the questions would cause the basic NaturalLI system to perform poorly. The test set for this corpus consists of only 68 examples, and therefore both perceived large differences in model scores and the apparent best system should be interpreted cautiously. NaturalLI consistently achieves the best training accuracy, and is more stable between configurations on the test set. For instance, it may be consistently discarding lexically similar but actually contradictory premises that often confuse some subset of the baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 51,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.3"
},
{
"text": "KNOWBOT is the dialog system presented in Hixon et al. (2015) . We report numbers for two variants of the system, and additionally evaluate on a larger test set of 250 science exam questions with an associated 500 example training set (and 249 example development set). These are substantially more difficult as they contain a far larger number of questions that require an understanding of a more complex process. Nonetheless, the trend illustrated in Table 1 holds for this larger set, as shown in Table 2 . Note that with a web-scale corpus, accuracy of an IR-based system can be pushed up to 51.4%; a PMI-based solver, in turn, achieves an accuracy of 54.8% -admittedly higher than our best system (Clark et al., 2016) . 3 An interesting avenue of future work would be to run NaturalLI over such a large web-scale corpus, and to incorporate PMI-based statistics into the evaluation function.",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "Hixon et al. (2015)",
"ref_id": "BIBREF19"
},
{
"start": 596,
"end": 616,
"text": "(Clark et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 619,
"end": 620,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 1",
"ref_id": null
},
{
"start": 394,
"end": 401,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.3"
},
{
"text": "We analyze some common types of errors made by the system on the training set. The most common error can be attributed to the question requiring complex reasoning about multiple premises. 29 of 108 questions in the training set (26%) contain multiple premises. Some of these cases can be recovered from (e.g., This happens because the smooth road has less friction.), while others are trivially out of scope for our method (e.g., The volume of water most likely decreased.). Although there is usually still some signal for which answer is most likely to be correct, these questions are fundamentally out-of-scope for the approach. Another class of errors which deserves mention is cases where a system produces the same score for multiple answers. This occurs fairly frequently in the standalone classifier (7% of examples in training; 4% loss from random guesses), and especially often in NaturalLI (11%; 6% loss from random guesses). This offers some insight into why incorporating other models -even with low weight -can offer significant boosts in the performance of NaturalLI. Both this and the previous class could be further mitigated by having a notion of a process, as in Berant et al. (2014) . (3: Results from personal correspondence with the authors.)",
"cite_spans": [
{
"start": 1062,
"end": 1063,
"text": "3",
"ref_id": null
},
{
"start": 1241,
"end": 1261,
"text": "Berant et al. (2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.4"
},
{
"text": "Other questions are simply not supported by any single sentence in the corpus. For example, A human offspring can inherit blue eyes has no support in the corpus that does not require significant multi-step inferences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.4"
},
{
"text": "A remaining chunk of errors are simply classification errors. For example, Water freezing is an example of a gas changing to a solid is marked as the best hypothesis, supported incorrectly by An ice cube is an example of matter that changes from a solid to a liquid to a gas, which after mutating water to ice cube matches every keyword in the hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.4"
},
{
"text": "We have improved NaturalLI to be more robust for question answering by running the inference over dependency trees, pre-computing deletions, and incorporating a soft evaluation function for predicting likely entailments when formal support could not be found. Lastly, we show that relational entailment and meronymy can be elegantly incorporated into natural logic. These features allow us to perform large-scale broad domain question answering, achieving strong results on the Aristo science exams corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For clarity we describe a simplified semantics here; NaturalLI implements the semantics described in Icard and Moss (2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Truth values are a trivial partial order corresponding to entailment: if t1 \u2264 t2 (i.e., t1 entails t2), and you know that t1 is true, then t2 must be true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their thoughtful comments. We gratefully acknowledge the support of the Allen Institute for Artificial Intelligence, and in particular Peter Clark and Oren Etzioni for valuable discussions, as well as for access to the Aristo corpora and associated preprocessing. We would also like to acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of AI2, DARPA, AFRL, or the US government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The held-out version uses additional facts from other questions' dialogs; the oracle version made use of human input on the question it was answering. The test set did not exist at the time KNOWBOT was published. The two variants of the system are as follows: held-out is the system's performance when it is not allowed to use the dialog collected from humans for the example it is answering; oracle is the full system. Note that the oracle variant is a human-in-the-loop system. We additionally present three baselines. The first simply uses Solr's IR confidence to rank entailment (Solr Only in Table 1 ). The max IR score of any premise given a hypothesis is taken as the score for that hypothesis. Furthermore, we report results for the entailment classifier defined in Section 4.1 (Classifier), optionally including the Solr score as a feature. We also report performance of the evaluation function in NaturalLI applied directly to the premise and hypothesis, without any inference (Evaluation Function). Last, we evaluate NaturalLI with the improvements presented in this paper (NaturalLI in Table 1). We additionally tune weights on our training set for a simple model combination with (1) Solr (with weight 6:1 for NaturalLI) and (2) the standalone classifier (with weight 24:1 for NaturalLI). Empirically, both parameters were observed to be fairly robust. To demonstrate the system's robustness on a larger dataset, we additionally evaluate on a test set of 250 additional science exam questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 581,
"end": 588,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "NaturalLI: Natural logic inference for common sense reasoning",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli and Christopher D. Manning. 2014. NaturalLI: Natural logic inference for common sense reasoning. In EMNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Leveraging linguistic structure for open domain information extraction",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Melvin Jose Johnson",
"middle": [],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguis- tic structure for open domain information extraction. In ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning Open Domain Knowledge From Text",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli. 2016. Learning Open Domain Knowl- edge From Text. Ph.D. thesis, Stanford University.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Representing meaning with a combination of logical and distributional models",
"authors": [
{
"first": "Islam",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Pengxiang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Islam Beltagy, Stephen Roller, Pengxiang Cheng, Ka- trin Erk, and Raymond J. Mooney. To appear, 2016. Representing meaning with a combination of logical and distributional models. Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Global learning of typed entailment rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of ACL, Portland, OR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Modeling biological processes for reading comprehension",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
},
{
"first": "Pei-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Brad",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Abby",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vander Linden",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Brad Huang, Christopher D Manning, Abby Van- der Linden, Brittany Harding, and Peter Clark. 2014. Modeling biological processes for reading comprehension. In Proc. EMNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning structured embeddings of knowledge bases",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. 2011. Learning structured embeddings of knowledge bases. In AAAI.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning new facts from knowledge bases with neural tensor networks and semantic word vectors",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3618"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Richard Socher, Christopher D Man- ning, and Andrew Y Ng. 2013. Learning new facts from knowledge bases with neural tensor net- works and semantic word vectors. arXiv preprint arXiv:1301.3618.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Verbocean: Mining the web for fine-grained semantic verb relations",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Chklovski and Patrick Pantel. 2004. Verb- ocean: Mining the web for fine-grained semantic verb relations. In EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A study of the knowledge base requirements for passing an elementary science test",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
}
],
"year": 2013,
"venue": "AKBC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Philip Harrison, and Niranjan Balasubra- manian. 2013. A study of the knowledge base re- quirements for passing an elementary science test. In AKBC.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automatic construction of inferencesupporting knowledge bases",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Bhakthavatsalam",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Humphreys",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Kinkead",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Niranjan Balasubramanian, Sum- ithra Bhakthavatsalam, Kevin Humphreys, Jesse Kinkead, Ashish Sabharwal, and Oyvind Tafjord. 2014. Automatic construction of inference- supporting knowledge bases. AKBC.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Combining retrieval, statistics, and inference to answer elementary science questions",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sab- harwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Elementary school science and math tests as a driver for AI: Take the Aristo challenge! AAAI",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark. 2015. Elementary school science and math tests as a driver for AI: Take the Aristo chal- lenge! AAAI.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using the framework",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cooper",
"suffix": ""
},
{
"first": "Dick",
"middle": [],
"last": "Crouch",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Van Eijck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Van Genabith",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Jaspars",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Milward",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework. Technical report, The FraCaS Consortium.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Universal Stanford dependencies: A cross-linguistic typology",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Silveira",
"suffix": ""
},
{
"first": "Katri",
"middle": [],
"last": "Haverinen",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Timothy Dozat, Na- talia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D Manning. 2014. Univer- sal Stanford dependencies: A cross-linguistic typol- ogy. In Proceedings of LREC.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Open question answering over curated and extracted knowledge bases",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2014,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In KDD.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The AI behind Watson. The AI Magazine",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ferrucci",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
},
{
"first": "Jmes",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gondek",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Kalyanpur",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lally",
"suffix": ""
},
{
"first": "J",
"middle": [
"William"
],
"last": "Murdock",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Prager",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Schlaefer",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ferrucci, Eric Brown, Jennifer Chu-Carroll, Jmes Fan, David Gondek, Aditya Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. The AI behind Watson. The AI Magazine.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The third PASCAL recognizing textual entailment challenge",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recog- nizing textual entailment challenge. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A discourse commitment-based framework for recognizing textual entailment",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Hickl",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Bensley",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Hickl and Jeremy Bensley. 2007. A discourse commitment-based framework for recognizing tex- tual entailment. In ACL-PASCAL Workshop on Tex- tual Entailment and Paraphrasing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning knowledge graphs for question answering through conversational dialog",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Hixon",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question an- swering through conversational dialog. NAACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recent progress on monotonicity. Linguistic Issues in Language Technology",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Icard",
"suffix": "III"
},
{
"first": "Lawrence",
"middle": [],
"last": "Moss",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Icard, III and Lawrence Moss. 2014. Recent progress on monotonicity. Linguistic Issues in Lan- guage Technology.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Combined distributional and logical semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "TACL",
"volume": "1",
"issue": "",
"pages": "179--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2013. Combined distributional and logical semantics. TACL, 1:179- 192.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Corpusbased semantics and pragmatics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2015,
"venue": "Annual Review of Linguistics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang and Christopher Potts. 2015. Corpus- based semantics and pragmatics. Annual Review of Linguistics, 1(1).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional seman- tics. In ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Natural logic for textual inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D Manning. 2007. Natural logic for textual inference. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Modeling semantic containment and exclusion in natural language inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Coling.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An extended model of natural logic",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the eighth international conference on computational semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computa- tional semantics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Natural Language Inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney. 2009. Natural Language Inference. Ph.D. thesis, Stanford.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Open language learning for information extraction",
"authors": [
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bart",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2012,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Natural language syntax and first-order inference",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "McAllester",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Givan",
"suffix": ""
}
],
"year": 1992,
"venue": "Artificial Intelligence",
"volume": "56",
"issue": "1",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A McAllester and Robert Givan. 1992. Natural language syntax and first-order inference. Artificial Intelligence, 56(1):1-20.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "COGEX: A logic prover for question answering",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Maiorano",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Moldovan, Christine Clark, Sanda Harabagiu, and Steve Maiorano. 2003. COGEX: A logic prover for question answering. In NAACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The grade 4 elementary-level science test",
"authors": [
{
"first": "",
"middle": [],
"last": "Nysed",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NYSED. 2014. The grade 4 elementary-level science test. http://www.nysedregents. org/Grade4/Science/home.html.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Markov logic networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine learning",
"volume": "62",
"issue": "1-2",
"pages": "107--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning, 62(1- 2):107-136.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin M",
"middle": [],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL-HLT.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Studies on natural logic and categorial grammar",
"authors": [
{
"first": "V\u00edctor",
"middle": [
"Manuel"
],
"last": "S\u00e1nchez Valencia",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00edctor Manuel S\u00e1nchez Valencia. 1991. Studies on natural logic and categorial grammar. Ph.D. thesis, University of Amsterdam.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning first-order horn clauses from web text",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schoenmackers",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2010,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Schoenmackers, Oren Etzioni, Daniel S Weld, and Jesse Davis. 2010. Learning first-order horn clauses from web text. In EMNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Logical inference on dependency-based compositional semantics",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Tian, Yusuke Miyao, and Takuya Matsuzaki. 2014. Logical inference on dependency-based com- positional semantics. In ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Essays in logical semantics",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Van Benthem",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan van Benthem. 1986. Essays in logical seman- tics. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Open information extraction using wikipedia",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In ACL. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Struc- tured classification with probabilistic categorial grammars. In UAI. AUAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "felines have a tail H: Some cats have a tail where we must first mutate cat to feline, versus: P: All cats have a tail H: Some felines have a tail",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "An illustration of monotonicity using different partial orders. (a) The monotonicity of all and some in their first arguments, over a domain of denotations. (b) An illustration of the born in monotone operator over the meronymy hierarchy, and the operator is an island as neither monotone or antitone.",
"uris": null,
"type_str": "figure"
}
}
}
}