| { |
| "paper_id": "P97-1019", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:16:05.597439Z" |
| }, |
| "title": "Negative Polarity Licensing at the Syntax-Semantics Interface", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Fry", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
"institution": "Stanford University",
"location": {
"settlement": "Stanford",
"postCode": "94305-2150",
| "region": "CA", |
| "country": "USA" |
| } |
| }, |
"email": "fry@csli.stanford.edu"
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recent work on the syntax-semantics interface (see e.g. (Dalrymple et al., 1994)) uses a fragment of linear logic as a 'glue language' for assembling meanings compositionally. This paper presents a glue language account of how negative polarity items (e.g. ever, any) get licensed within the scope of negative or downward-entailing contexts (Ladusaw, 1979), e.g. Nobody ever left. This treatment of licensing operates precisely at the syntax-semantics interface, since it is carried out entirely within the interface glue language (linear logic). In addition to the account of negative polarity licensing, we show in detail how linear-logic proof nets (Girard, 1987; Gallier, 1992) can be used for efficient meaning deduction within this 'glue language' framework.", |
| "pdf_parse": { |
| "paper_id": "P97-1019", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recent work on the syntax-semantics interface (see e.g. (Dalrymple et al., 1994)) uses a fragment of linear logic as a 'glue language' for assembling meanings compositionally. This paper presents a glue language account of how negative polarity items (e.g. ever, any) get licensed within the scope of negative or downward-entailing contexts (Ladusaw, 1979), e.g. Nobody ever left. This treatment of licensing operates precisely at the syntax-semantics interface, since it is carried out entirely within the interface glue language (linear logic). In addition to the account of negative polarity licensing, we show in detail how linear-logic proof nets (Girard, 1987; Gallier, 1992) can be used for efficient meaning deduction within this 'glue language' framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "A recent strain of research on the interface between syntax and semantics, starting with (Dalrymple et al., 1993), uses a fragment of linear logic as a 'glue language' for assembling the meaning of a sentence compositionally. In this approach, meaning assembly is guided not by a syntactic constituent tree but rather by the flatter functional structure (the LFG f-structure) of the sentence.",
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 113, |
| "text": "(Dalrymple et al., 1993)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As a brief review of this approach, consider sentence (1):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) Everyone left.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "1" |
| }, |
| { |
"text": "f: [ PRED 'LEAVE' SUBJ g: [ PRED 'EVERYONE' ] ]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "1" |
| }, |
| { |
"text": "Each word in the sentence is associated with a 'meaning constructor' template, specified in the lexicon; these meaning constructors are then instantiated with values from the f-structure. For sentence (1), this produces two premises of the linear logic glue language:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
| }, |
| { |
| "text": "everyone:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SUBJ [ ] g PR D 'EWRYONE']", |
| "sec_num": null |
| }, |
| { |
| "text": "left:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SUBJ [ ] g PR D 'EWRYONE']", |
| "sec_num": null |
| }, |
| { |
| "text": "--o H\"-*t every(person, S) g~,',-% X --o fa\"-*t leave (X) In the everyone premise the higher-order variable S ranges over the possible scope meanings of the quantifier, with lower-case x acting as a traditional first-order variable \"placeholder\" within the scope. H ranges over LFG structures corresponding to the meaning of the entire generalized quantifier3", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 57, |
| "text": "(X)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SUBJ [ ] g PR D 'EWRYONE']", |
| "sec_num": null |
| }, |
| { |
"text": "A meaning for (1) can be derived by applying the linear version of modus ponens, during which (unlike classical logic) the first premise everyone \"consumes\" the second premise left. This deduction, along with the substitutions H ↦ fσ, X ↦ x and S ↦ λx.leave(x), produces the final meaning fσ ⤳t every(person, λx.leave(x)), which is in this simple case the only reading for the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "1"
| }, |
| { |
| "text": "One advantage of this deductive style of meaning assembly is that it provides an elegant account of quantifier scoping: each possible scope has a corresponding proof, obviating the need for quantifier storage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Background",
"sec_num": "1"
| }, |
| { |
"text": "A proof net (Girard, 1987) is an undirected, connected graph whose node labels are propositions. A theorem of multiplicative linear logic corresponds to only one proof net; thus the manipulation of proof nets is more efficient than sequent deduction, in which the same theorem might have different proofs corresponding to different orderings of the inference steps. A further advantage of proof nets for our purposes is that an invalid meaning deduction, e.g. one corresponding to some spurious scope reading of a particular sentence, can be illustrated by exhibiting its defective graph, which demonstrates visually why no proof exists for it. Proof net techniques have also been exploited within the categorial grammar community, for example for reasons of efficiency (Morrill, 1996) and in order to give logical descriptions of certain syntactic phenomena (Lecomte and Retoré, 1995). (Footnote 1: Here we have simplified the notation of Dalrymple et al. somewhat, for example by stripping away the universal quantifier operators from the variables. In this regard, note that the lower-case variables stand for arbitrary constants rather than particular terms, and generally are given limited scope within the antecedent of the premise. Upper-case variables are Prolog-like variables that become instantiated to specific terms within the proof, and generally their scope is the entire premise.)",
"cite_spans": [
{
"start": 12,
"end": 26,
"text": "(Girard, 1987)",
"ref_id": "BIBREF5"
},
{
"start": 770,
"end": 785,
"text": "(Morrill, 1996)",
"ref_id": "BIBREF10"
},
{
"start": 859,
"end": 885,
"text": "(Lecomte and Retoré, 1995)",
"ref_id": null
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning deduction via proof nets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "~ ~ H\"-*t S(x)) \u00ae (H',.** every(person, S)) J-g~-,~ X @ (.f~',~, leave(X)) \u00b1 .f,,\"~t M", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning deduction via proof nets", |
| "sec_num": "2" |
| }, |
| { |
"text": "In this section we construct a proof net from the premises for sentence (1), showing how to apply higher-order unification to the meaning terms in the process. We then review the O(n²) algorithm of Gallier (1992) for propositional (multiplicative) linear logic which checks whether a given proof net is valid, i.e. corresponds to a proof. The complete process for assembling a meaning from its premises will be shown in four steps: (1) rewrite the premises in a normalized form, (2) assemble the premises into a graph, (3) connect together the positive (\"producer\") and negative (\"consumer\") meaning terms, unifying them in the process, and (4) test whether the resulting graph encodes a proof.",
"cite_spans": [
{
"start": 198,
"end": 212,
| "text": "Gallier (1992)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning deduction via proof nets", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Since our goal is to derive, from the premises of sentence (1), a meaning M for the f-structure f of the entire sentence, what we seek is a proof of the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1: set up the sequent", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "everyone ⊗ left ⊢ fσ ⤳t M.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1: set up the sequent", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Glue language semantics has so far been restricted to the multiplicative fragment of linear logic, which uses only the multiplicative conjunction operator ⊗ (tensor) and the linear implication operator ⊸.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1: set up the sequent", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "The same fragment is obtained by replacing ⊸ with the operators ⅋ and ⊥, where ⅋ (par) is the multiplicative 'or'² and ⊥ is linear negation, and (A ⊸ B) ≡ (A⊥ ⅋ B). Using the version without ⊸, we normalize two-sided sequents of the form A1, ..., An ⊢ B into right-sided sequents of the form ⊢ A1⊥, ..., An⊥, B. The proof net method further requires that sequents be in negation normal form, in which negation is applied only to atomic terms.³ Moving the negations inward (the usual double-negation and 'de Morgan' properties hold), and displaying the full premises, we obtain the normalized sequent",
"cite_spans": [
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1: set up the sequent", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "⊢ ((gσ ⤳e x)⊥ ⅋ Hσ ⤳t S(x)) ⊗ (Hσ ⤳t every(person, S))⊥, gσ ⤳e X ⊗ (fσ ⤳t leave(X))⊥, fσ ⤳t M.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Step 1: set up the sequent", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "The next step is to create a graph whose nodes consist of all the terms which occur in the sequent. That is, a node is created for each literal C and for each negated literal C⊥; a node is created for each compound term A ⊗ B or A ⅋ B; and nodes are also created for its subterms A and B. Then, for each node of the form A ⅋ B, we draw a soft edge in the form of a horizontal dashed line connecting it to nodes A and B. For each node of the form A ⊗ B, we draw a hard edge (solid line) connecting it to nodes A and B. For the example at hand, this produces the graph in Figure 1 (ignoring the curved edges at the top).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 568, |
| "end": 576, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Step 2: create the graph", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The final step in assembling the proof net is to connect together the literal nodes at the top of the graph. It is at this stage that unification is applied to the variables in order to assign them the values they will assume in the final meaning. Each different way of connecting the literals and instantiating their variables corresponds to a different reading for the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Step 3: connect the literals",
| "sec_num": "2.3" |
| }, |
| { |
"text": "For each literal, we draw an edge connecting it to a matching literal of opposite sign; i.e. each literal A is connected to a literal B⊥ where A unifies with B. Every literal in the graph must be connected in this way. If for some literal A there exists no matching literal B of opposite sign then the graph does not encode a proof and the algorithm fails.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Step 3: connect the literals",
| "sec_num": "2.3" |
| }, |
| { |
"text": "In this process the unifications apply to whole expressions of the form fσ ⤳t M, including both variables over LFG structures and variables over meaning terms. For the meaning terms, this requires a limited higher-order unification scheme that produces the unifier λz.p(z) from a second-order term T and a first-order term p(z). As noted by Dalrymple et al. (to appear), all the apparatus that is required for their simple intensional meaning language falls within the decidable Lλ fragment of Miller (1990), and therefore can be implemented as an extension of a first-order unification scheme such as that of Prolog.",
| "cite_spans": [ |
| { |
| "start": 494, |
| "end": 507, |
| "text": "Miller (1990)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Step 3: connect the literals",
| "sec_num": "2.3" |
| }, |
| { |
"text": "For the example at hand, there is only one way to connect the literals (and hence at most one reading for the sentence), as shown in Figure 1. At this stage, the unifications would bind the variables in Figure 1.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 141, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 204, |
| "end": 212, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
"section": "Step 3: connect the literals",
| "sec_num": "2.3" |
| }, |
| { |
"text": "Finally, we apply Gallier's (1992) algorithm to the connected graph in order to check that it corresponds to a proof. This algorithm recursively decomposes the graph from the bottom up while checking for cycles. Here we present the algorithm informally; for proofs of its correctness and O(n²) time complexity see (Gallier, 1992).",
"cite_spans": [
{
"start": 18,
"end": 34,
"text": "Gallier's (1992)",
"ref_id": "BIBREF4"
},
{
"start": 314,
"end": 329,
"text": "(Gallier, 1992)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Step 4: test the graph for validity",
"sec_num": "2.4"
| }, |
| { |
"text": "Base case: If the graph consists of a single link between literals A and A⊥, the algorithm succeeds and the graph corresponds to a proof.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.4", |
| "sec_num": null |
| }, |
| { |
"text": "Begin the decomposition by deleting the bottom-level par nodes. If there is some terminal node A ⅋ B connected to higher nodes A and B, delete A ⅋ B. This of course eliminates the dashed edge from A ⅋ B to A and to B, but does not remove nodes A and B. Then run the algorithm on the resulting smaller (possibly unconnected) graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive case 1",
"sec_num": null
| }, |
| { |
"text": "Otherwise, if no terminal par node is available, find a terminal tensor node to delete. This case is more complicated because not every way of deleting a tensor node necessarily leads to success, even for a valid proof net. Just choose some terminal tensor node A ⊗ B. If deleting that node results in a single, connected (i.e. cyclic) graph, then that node was not a valid splitting tensor and a different one must be chosen instead, or else halt with failure if none is available. Otherwise, delete A ⊗ B, which leaves nodes A and B belonging to two unconnected graphs G1 and G2. Then run the algorithm on G1 and G2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recursive case 2:", |
| "sec_num": null |
| }, |
| { |
| "text": "This process will be demonstrated in the examples which follow.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recursive case 2:", |
| "sec_num": null |
| }, |
| { |
"text": "This section gives an implementation of NPI licensing at the syntax-semantics interface using the glue language. No separate proof or interpretation apparatus is required, only modification of the relevant meaning constructors specified in the lexicon.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A glue language treatment of NPI licensing", |
| "sec_num": "3" |
| }, |
| { |
"text": "There is a resource-based interpretation of the NPI licensing problem: the negative or decreasing licensing operator must make available a resource, call it ℓ, which will license the NPI's, if any, within its scope. If no such resource is made available the NPI's are unlicensed and the sentence is rejected.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "⁴Here we consider only 'rightward' licensing (within the scope of the quantifier), but this approach applies equally well to 'leftward' licensing (within the restriction). Figure 2: Invalid proof net of *Al sang yet.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 301, |
| "end": 309, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The NPI's must be made to require the ℓ resource. The way one implements such a requirement in linear logic is to put the required resource on the left side of the implication operator ⊸. This is precisely our approach. However, since the NPI is just 'borrowing' the license, not consuming it (after all, more than one NPI may be licensed, as in No one ever saw anyone), we also add the resource to the right hand side of the implication. That is, for a meaning constructor of the form A ⊸ B, we can make a corresponding NPI meaning constructor of the form (A ⊗ ℓ) ⊸ (B ⊗ ℓ).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "For example, the meaning constructor proposed in (Dalrymple et al., 1993) for the sentential modifier obviously is obviously: fσ ⤳t P ⊸ fσ ⤳t obviously(P).",
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 73, |
| "text": "(Dalrymple et al., 1993)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Under this analysis of sentential modification, NPI adverbs such as yet or ever would take the same form, but with the licensing apparatus added:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "ever:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "(fσ ⤳t P ⊗ ℓ) ⊸ (fσ ⤳t ever(P) ⊗ ℓ).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This technique can be readily applied to the other categories of NPI as well. In the case of the NPI quantifier phrase anyone 5 the licensing apparatus is added to the earlier template for everyone to produce the meaning constructor anyone:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "(gσ ⤳e x ⊸ Hσ ⤳t S(x) ⊗ ℓ) ⊸ (Hσ ⤳t any(person, S) ⊗ ℓ).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI's", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The only function of the ℓ ⊸ ℓ pattern inside an NPI is to consume the resource ℓ and then produce it again. However, for this to happen, the resource ℓ will have to be generated by some licenser whose scope includes the NPI, as we show below. If no outside ℓ resource is made available, then the extraneous, unconsumed ℓ material in the NPI guarantees that no proof will be generated. In proof net terms, the output ℓ cannot feed back into the input ℓ without producing a cycle. (Footnote 5: Any also has another, so-called 'free choice' interpretation (as in e.g. Anyone will do) (Ladusaw, 1979; Kadmon and Landman, 1993), which we ignore here.)",
"cite_spans": [
{
"start": 582,
"end": 597,
"text": "(Ladusaw, 1979;",
"ref_id": "BIBREF7"
},
{
"start": 598,
"end": 623,
"text": "Kadmon and Landman, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
| "text": "We now demonstrate how the deduction is blocked for a sentence containing an unlicensed NPI such as (2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
"text": "(2) *Al sang yet.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
| "text": "The relevant premises are", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
"text": "Al: gσ ⤳e al",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
"text": "sang: gσ ⤳e Y ⊸ fσ ⤳t sing(Y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
"text": "yet: (fσ ⤳t P ⊗ ℓ) ⊸ (fσ ⤳t yet(P) ⊗ ℓ)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
"text": "The graph of (2), shown in Figure 2, does not encode a proof. The reason is shown in Figure 3. At this point in the algorithm, we have deleted the leftmost terminal tensor node. However, the only remaining terminal tensor node cannot be deleted, since doing so would produce a single connected subgraph; the cycle is in the edge from ℓ to ℓ⊥. At this point the algorithm fails and no meaning is derived.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 35, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 86, |
| "end": 94, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI's",
"sec_num": "3.1"
| }, |
| { |
"text": "It is clear from the proposal so far that lexical items which license NPI's must make available an ℓ resource within their scope which can be consumed by the NPI. However, that is not enough; a licenser can still occur inside a sentence without an NPI, as in e.g. No one left. The resource accounting of linear logic requires that we 'clean up' by consuming any excess ℓ resources in order for the meaning deduction to go through. Fortunately, we can solve this problem within the licenser's meaning constructor itself. For a lexical category whose meaning constructor is of the form A ⊸ B, we assign to the NPI licensers of that category the meaning constructor (ℓ ⊸ (A ⊗ ℓ)) ⊸ B, which uses linear implication to introduce 'hypothetical' material. All of the NPI licensing occurs within the hypothetical (left) side of the outermost implication. Since the ℓ resource is made available to the NPI only within this hypothetical, it is guaranteed that the NPI is assembled within, and therefore falls under, the scope of the licenser. Furthermore, the formula is 'self cleaning', in that the ℓ resource, even if not used by an NPI, does not survive the hypothetical and so cannot affect the meaning of the licenser in some other way. That is, the licensing constructor (ℓ ⊸ (A ⊗ ℓ)) ⊸ B can derive all of the same meanings as the nonlicensing version A ⊸ B.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning constructors for NPI licensers", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Proof We construct the proof net of the equivalent", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fact 1 (g-o(A \u00ae l))--oB F-A--oB", |
| "sec_num": null |
| }, |
| { |
"text": "right-sided sequent ⊢ (ℓ⊥ ⅋ (A ⊗ ℓ)) ⊗ B⊥, A⊥, B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fact 1: (ℓ ⊸ (A ⊗ ℓ)) ⊸ B ⊢ A ⊸ B",
"sec_num": null
| }, |
| { |
| "text": "and then test that it is valid.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Fact 1: (ℓ ⊸ (A ⊗ ℓ)) ⊸ B ⊢ A ⊸ B",
| "sec_num": null |
| }, |
| { |
| "text": "(\u00a3~I~(A\u00ae\u00a3))\u00aeB \u00b1 A 1B ==~ A \u00b1 B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fact 1 (g-o(A \u00ae l))--oB F-A--oB", |
| "sec_num": null |
| }, |
| { |
| "text": "::=$ \u00a3\u00b1 A\u00ae~ A \u00b1 ~zg AA \u00b1 []", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fact 1 (g-o(A \u00ae l))--oB F-A--oB", |
| "sec_num": null |
| }, |
| { |
"text": "This self-cleaning property means that a licensing resource ℓ is exactly that: a license. Within the scope of the licenser, the ℓ is available to be used once, several times (in a \"chain\" of NPI's which pass it along), or not at all, as required.⁶",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "A simple example is provided by the NPI-licensing adverb rarely. We modify our sentential adverb template to create a meaning constructor for rarely which licenses an NPI within the sentence it modifies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
| "text": "rarely:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "(ℓ ⊸ (fσ ⤳t P ⊗ ℓ)) ⊸ fσ ⤳t rarely(P).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "The case of licensing quantifier phrases such as nobody and few students follows the same pattern. For example, nobody takes the form nobody: ((gσ ⤳e x ⊗ ℓ) ⊸ (Hσ ⤳t S(x) ⊗ ℓ)) ⊸ Hσ ⤳t no(person, S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
| "text": "We can now derive a meaning for sentence (3), in which nobody and anyone play the roles of licenser and NPI, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
| "text": "(3) Nobody saw anyone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
| "text": "Normally, a sentence with two quantifiers would generate two different scope readings--in this case, (4) and (5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "(4) fσ ⤳t no(person, λx.any(person, λy.see(x, y)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "(5) fσ ⤳t any(person, λy.no(person, λx.see(x, y)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
| "text": "However, Ladusaw's generalization is that NPI's are licensed within the scope of their licensers. In fact, the semantics of any prevent it from taking wide scope in such a case (Kadmon and Landman, 1993; Ladusaw, 1979, p. 96-101) . Our analysis, then, should derive (4) but block (5).", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 203, |
| "text": "(Kadmon and Landman, 1993;", |
| "ref_id": null |
| }, |
| { |
| "start": 204, |
| "end": 229, |
| "text": "Ladusaw, 1979, p. 96-101)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "⁶This multiple-use effect can be achieved more directly using the exponential operator !; however this unnecessary step would take us outside of the multiplicative fragment of linear logic and preclude the proof net techniques described earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Meaning constructors for NPI licensers",
"sec_num": "3.2"
| }, |
| { |
"text": "The premises for (3) are nobody: ((gσ ⤳e x ⊗ ℓ) ⊸ (Hσ ⤳t S(x) ⊗ ℓ)) ⊸ Hσ ⤳t no(person, S); saw: (gσ ⤳e X ⊗ hσ ⤳e Y) ⊸ fσ ⤳t see(X, Y); anyone: (hσ ⤳e y ⊸ Hσ ⤳t T(y) ⊗ ℓ) ⊸ (Hσ ⤳t any(person, T) ⊗ ℓ). The proof net for reading (4) is shown in Figure 4.⁷ As required, the net in Figure 4, corresponding to wide scope for no, is valid. The first step in the proof of Figure 4 is to delete the only available splitting tensor, which is boxed in the figure. A second way of linking the positive and negative literals in Figure 4 produces a net which corresponds to (5), the spurious reading in which any has wide scope. In that graph, however, all three of the available terminal tensor nodes produce a single, connected (cyclic) graph if deleted, so decomposition cannot even begin and the algorithm fails. Once again, it is the licensing resources which are enforcing the desired constraint.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 42, |
| "end": 50, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 79, |
| "end": 87, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 167, |
| "end": 175, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 318, |
| "end": 326, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "--o H~-*t no(person, S) (ga',ze X \u00ae ha'x~e Y) --o fa-,~t see(X, Y) (h~.% y --o I~.*, T(y) \u00ae i) --o (I~.,t any(person, T) \u00ae \u00a3)", |
| "sec_num": null |
| }, |
| { |
| "text": "Categorial grammar approaches", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "The \u00a3 atom used here is somewhat analogous to the (negative) lexical 'monotonicity markers' proposed by S~chez Valencia (1991; 1995) and Dowty (1994) for categorial grammar. In these approaches, categories of the form A/B axe marked with monotonicity properties, i.e. as A+/B +, A+/B -, A-/B +, or A-/B-, and similarly for left-leaning categories of the form A\\B. Then monotonicity constraints can be enforced using category assignments like the following from (Dowty, 1994) : VP-/VP-S~chez Valencia and Dowty, however, are less concerned with the distribution of NPI's than they are with using monotonicity properties to characterize valid inference patterns, an issue which we have ignored here. Hence their work emphasizes logical polarity, where an odd number of negative marks indicates negative polarity, and an even number of negatives cancel each other to produce positive polarity. For example, the category of no above \"flips\" the polarity of its argument. By contrast, our system, like Ladusaw's (1979) original proposal, is what Dowty (1994, p. 134-137) would call \"intuitionistic\": ~The subscripts have been stripped from the formulas in order to save space in the diagram. since multiple negative contexts do not cancel each other out, we permit doubly-licensed NPI's as in Nobody rarely sees anyone. To handle such cases, while at the same time accounting for monotonic inference properties, Dowty (1994) proposes a doublemarking framework whereby categories like A-/B + are marked for both logical polarity and syntactic polarity.", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 126, |
| "text": "Valencia (1991;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 127, |
| "end": 132, |
| "text": "1995)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 137, |
| "end": 149, |
| "text": "Dowty (1994)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 461, |
| "end": 474, |
| "text": "(Dowty, 1994)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1041, |
| "end": 1065, |
| "text": "Dowty (1994, p. 134-137)", |
| "ref_id": null |
| }, |
| { |
| "start": 1407, |
| "end": 1419, |
| "text": "Dowty (1994)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "We have elaborated on and extended slightly the 'glue language' approach to semantics of Dalrymple et al. It was shown how linear logic proof nets can be used for efficient natural-language meaning deductions in this framework. We then presented a glue language treatment of negative polarity licensing which ensures that NPI's are licensed within the semantic scope of their licensers, following (Ladusaw, 1979) . This system uses no new global rules or features, nor ambiguous lexical entries, but only the addition of Cs to the relevant items within the lexicon. The licensing takes place precisely at the syntax-semantics interface, since it is implemented entirely in the interface glue language. Finally, we noted briefly some similarities and differences between this system and categorial grammar 'monotonicity marking' approaches.", |
| "cite_spans": [ |
| { |
| "start": 397, |
| "end": 412, |
| "text": "(Ladusaw, 1979)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "2This notation isGallier's (1992). 3Note that we refer to noncompound terms as 'literal' or 'atomic' terms because they are atomic from the point of view of the glue language, even though these terms are in fact of the form S',~ M, where S is an expression over LFG structures and M is a type-r expression in the meaning language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "6 Acknowledgements I'm grateful to Mary Dalrymple, John Lamping and Stanley Peters for very helpful discussions of this material. Vineet Gupta, Martin Kay, Fernando Pereira and four anonymous reviewers also provided helpful comments on several points. All remaining errors are naturally my own.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "LFG semantics via constraints", |
| "authors": [ |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Dalrymple", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lamping", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Saraswat", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 6th Meeting of the European Association for Computational Linguistics, University of", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mary Dalrymple, John Lamping, and Vijay Saraswat. 1993. LFG semantics via constraints. In Proceedings of the 6th Meeting of the European Association for Computational Linguistics, Uni- versity of Utrecht, April.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A deductive account of quantification in LFG", |
| "authors": [ |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Dalrymple", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lamping", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Saraswat", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mary Dalrymple, John Lamping, Fernando Pereira, and Vijay Saraswat. 1994. A deductive account of quantification in LFG. In Makoto Kanazawa, Christopher J. Pifi6n, and Henriette de Swart, ed- itors, QuantiJ~ers, Deduction, and Context. CSLI Publications, Stanford, CA.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "To appear. Quantifiers, anaphora, and intensionality", |
| "authors": [ |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Dalrymple", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lamping", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Saraswat", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Journal of Logic, Language and Information", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mary Dalrymple, John Lamping, Fernando Pereira, and Vijay Saraswat. To appear. Quantifiers, anaphora, and intensionality. Journal of Logic, Language and Information.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The role of negative polarity and concord marking in natural language reasoning", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Dowty", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of SALT IV", |
| "volume": "", |
| "issue": "", |
| "pages": "114--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Dowty. 1994. The role of negative polar- ity and concord marking in natural language rea- soning. In Mandy Harvey and Lynn Santelmann, editors, Proceedings of SALT IV, pages 114-144, Ithaca, NY. Cornell University.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Constructive logics. Part II: Linear logic and proof nets", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Gallier", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean Gallier. 1992. Constructive logics. Part II: Linear logic and proof nets. MS, Department of Computer and Information Science, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Linear logic", |
| "authors": [ |
| { |
| "first": "Jean-Yves", |
| "middle": [], |
| "last": "Girard", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Theoretical Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Yves Girard. 1987. Linear logic. Theoretical Computer Science, 50.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Polarity Sensitivity as Inherent Scope Relations", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ladusaw", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William A. Ladusaw. 1979. Polarity Sensitivity as Inherent Scope Relations. Ph.D. thesis, University of Texas, Austin. Reprinted in Jorge Hankamer, editor, Outstanding Dissertations in Linguistics. Garland, 1980.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Pomset logic as an alternative categorial grammar", |
| "authors": [ |
| { |
| "first": "Alain", |
| "middle": [], |
| "last": "Lecomte", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Retor6", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Formal Grammar. Proceedings of the Conference of the European Summer School in Logic, Language, and Information", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alain Lecomte and Christian Retor6. 1995. Pom- set logic as an alternative categorial grammar. In Glyn V. Morrill and Richard T. Oehrle, editors, Formal Grammar. Proceedings of the Conference of the European Summer School in Logic, Lan- guage, and Information, Barcelona.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A logic programming language with lambda abstraction, function variables and simple unification", |
| "authors": [ |
| { |
| "first": "Dale", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Extensions of Logic Programming", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dale A. Miller. 1990. A logic programming language with lambda abstraction, function variables and simple unification. In Peter Schroeder-Heister, ed- itor, Extensions of Logic Programming, Lecture Notes in Artificial Intelligence. Springer-Verlag.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Memoisation of categorial proof nets: parallelism in categorial processing", |
| "authors": [ |
| { |
| "first": "Glyn", |
| "middle": [ |
| "V" |
| ], |
| "last": "Morrill", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the Roma Workshop on Proofs and Linguistic Categories", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Glyn V. Morrill. 1996. Memoisation of categorial proof nets: parallelism in categorial processing. In V. Michele Abrusci and Claudia Casadio, editors, Proceedings of the Roma Workshop on Proofs and Linguistic Categories, Rome.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Studies on Natural Logic and Categorial Grammar", |
| "authors": [ |
| { |
| "first": "Valencia", |
| "middle": [], |
| "last": "Victor Shnchez", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victor Shnchez Valencia. 1991. Studies on Natu- ral Logic and Categorial Grammar. Ph.D. thesis, University of Amsterdam.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Parsing-driven inference: natural logic", |
| "authors": [ |
| { |
| "first": "Valencia", |
| "middle": [], |
| "last": "Victor Shnchez", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Linguistic Analysis", |
| "volume": "25", |
| "issue": "3-4", |
| "pages": "258--285", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Victor Shnchez Valencia. 1995. Parsing-driven in- ference: natural logic. Linguistic Analysis, 25(3- 4):258-285.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Proof net for Everyone left.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "into right-sided sequents of the form I-A~,..., A: m, B1,..., B,. (In sequent representations of this style, the comma represents \u00ae on the left side of the sequent and ~ on the right side.) In our new format, then, the proof takes the form F everyone \u00b1, left \u00b1 , .f~',ot M.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "as follows: X ~-~ x, H ~-~ f~, S ,-+ )~x.leave(x), M ~+ every(person, )~x.leaue(x)).", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "(e -o (A \u00ae t)) --o B.By its logical structure, being embedded inside another implication, the inner implication here serves ~.Y (9.~., At) \u00b1 (].~-'t P @ t) @ ((.f~-., yet(P)) x ~ l ~) J.~-*, M Point of failure. Bottom tensor node cannot be deleted.", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "x \u00ae \u00a3) .-o (H\".*t S(x) \u00ae ~))", |
| "num": null |
| } |
| } |
| } |
| } |