{
"paper_id": "J18-2003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:20:13.086539Z"
},
"title": "Spurious Ambiguity and Focalization",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universitat Polit\u00e8cnica de Catalunya Barcelona",
"location": {}
},
"email": "morrill@cs.upc.edu"
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": "",
"affiliation": {},
"email": "oriol.valentin@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Spurious ambiguity is the phenomenon whereby distinct derivations in grammar may assign the same structural reading, resulting in redundancy in the parse search space and inefficiency in parsing. Understanding the problem depends on identifying the essential mathematical structure of derivations. This is trivial in the case of context free grammar, where the parse structures are ordered trees; in the case of type logical categorial grammar, the parse structures are proof nets. However, with respect to multiplicatives, intrinsic proof nets have not yet been given for displacement calculus, and proof nets for additives, which have applications to polymorphism, are not easy to characterize. In this context we approach here multiplicative-additive spurious ambiguity by means of the proof-theoretic technique of focalization.",
"pdf_parse": {
"paper_id": "J18-2003",
"_pdf_hash": "",
"abstract": [
{
"text": "Spurious ambiguity is the phenomenon whereby distinct derivations in grammar may assign the same structural reading, resulting in redundancy in the parse search space and inefficiency in parsing. Understanding the problem depends on identifying the essential mathematical structure of derivations. This is trivial in the case of context free grammar, where the parse structures are ordered trees; in the case of type logical categorial grammar, the parse structures are proof nets. However, with respect to multiplicatives, intrinsic proof nets have not yet been given for displacement calculus, and proof nets for additives, which have applications to polymorphism, are not easy to characterize. In this context we approach here multiplicative-additive spurious ambiguity by means of the proof-theoretic technique of focalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In context free grammar (CFG), sequential rewriting derivations exhibit spurious ambiguity: Distinct rewriting derivations may correspond to the same parse structure (tree) and the same structural reading. 1 In this case, it is transparent to develop parsing algorithms avoiding spurious ambiguity by reference to parse trees. In categorial grammar (CG), the problem is more subtle. The Cut-free Lambek sequent proof search space is finite, but involves a combinatorial explosion of spuriously ambiguous sequential proofs. This spurious ambiguity in CG can be understood, analogously to CFG, as involving inessential rule reorderings, which we parallelize in underlying geometric parse structures that are (planar) proof nets.",
"cite_spans": [
{
"start": 206,
"end": 207,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The planarity of Lambek proof nets reflects that the formalism is continuous or concatenative. But the challenge of natural grammar is discontinuity or apparent displacement, whereby there is syntactic/semantic mismatch, or elements appearing out of place. Hence the subsumption of Lambek calculus by displacement calculus D, including intercalation as well as concatenation (Morrill, Valent\u00edn, and Fadda 2011) .",
"cite_spans": [
{
"start": 375,
"end": 410,
"text": "(Morrill, Valent\u00edn, and Fadda 2011)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Proof nets for D must be partially non-planar; steps towards intrinsic correctness criteria for displacement proof nets are made in Fadda (2010) and Moot (2014 Moot ( , 2016 . Additive proof nets are considered in Hughes and van Glabbeck (2005) and Abrusci and Maieli (2016) . However, even in the case of Lambek calculus, it is not clear that in practice parsing by reference to intrinsic criteria (Morril 2011; Moot and Retor\u00e9 2012, Appendix B) is more efficient than parsing by reference to extrinsic criteria of uniform sequent calculus (Miller et al. 1991; Hendriks 1993) . In its turn, on the other hand, uniform proof does not extend to product left rules and product unit left rules, nor to additives. The focalization of Andreoli (1992) is a methodology midway between proof nets and uniform proof. Here, we apply the focusing discipline to the parsing as deduction of D with additives.",
"cite_spans": [
{
"start": 132,
"end": 144,
"text": "Fadda (2010)",
"ref_id": null
},
{
"start": 149,
"end": 159,
"text": "Moot (2014",
"ref_id": "BIBREF26"
},
{
"start": 160,
"end": 173,
"text": "Moot ( , 2016",
"ref_id": "BIBREF27"
},
{
"start": 214,
"end": 244,
"text": "Hughes and van Glabbeck (2005)",
"ref_id": "BIBREF14"
},
{
"start": 249,
"end": 274,
"text": "Abrusci and Maieli (2016)",
"ref_id": "BIBREF0"
},
{
"start": 541,
"end": 561,
"text": "(Miller et al. 1991;",
"ref_id": "BIBREF25"
},
{
"start": 562,
"end": 576,
"text": "Hendriks 1993)",
"ref_id": "BIBREF10"
},
{
"start": 730,
"end": 745,
"text": "Andreoli (1992)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In Chaudhuri, Miller, and Saurin (2008) , multifocusing is defined for unit-free multiplicative-additive linear logic, providing canonical sequent proofs; an eventual goal would be to formulate multifocusing for multiplicative-additive categorial logic and for categorial logic generally. In this respect the present article represents an intermediate step (and includes units, which have linguistic use). Note that Simmons (2012) develops focusing for Lambek calculus with additives, but not for displacement logic, for which we show completeness of focusing here.",
"cite_spans": [
{
"start": 3,
"end": 39,
"text": "Chaudhuri, Miller, and Saurin (2008)",
"ref_id": "BIBREF5"
},
{
"start": 416,
"end": 430,
"text": "Simmons (2012)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The article is structured as follows. In Sections 1.1 and 1.2 we describe spurious ambiguity in context-free grammar and Lambek calculus. In Section 2 we recall the displacement calculus with additives. In Section 3 we contextualize the problem of spurious ambiguity in computational linguistics. In Section 4 we discuss focalization. In Section 5 we present focalization for the displacement calculus with additives. In Section 6 we prove the completeness of focalization for displacement calculus with additives. In Section 7 we exemplify focalization and evaluate it compared with uniform proof. We conclude in Section 8. In Appendix A we prove the auxiliary technical result of Cut-elimination for weak focalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Consider the following production rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CFG",
"sec_num": "1.1"
},
{
"text": "(1) S \u2192 QP VP QP \u2192 Q CN VP \u2192 TV N These generate the following sequential rewriting derivations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CFG",
"sec_num": "1.1"
},
{
"text": "(2) S \u2192 QP VP \u2192 Q CN VP \u2192 Q CN TV N S \u2192 QP VP \u2192 QP TV N \u2192 Q CN TV N These sequential rewriting derivations correspond to the same parellelized parse structure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CFG",
"sec_num": "1.1"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CFG",
"sec_num": "1.1"
},
{
"text": "S QP VP Q CN TV N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CFG",
"sec_num": "1.1"
},
{
"text": "And they correspond to the same structural reading; sequential rewriting in CFG has, then, spurious ambiguity, and the underlying geometric parse structures are ordered trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CFG",
"sec_num": "1.1"
},
{
"text": "Lambek calculus L (Lambek 1958 ) is a logic of strings with the operation + of concatenation. Recall the definitions of types, configurations, and sequents of L in terms of a set P of primitive types; 2 in Backus Naur form (BNF) notation where F is the set of types, O the set of configurations, and Seq(L) the set of sequents: ",
"cite_spans": [
{
"start": 18,
"end": 30,
"text": "(Lambek 1958",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CG",
"sec_num": "1.2"
},
{
"text": "(6) \u0393 \u21d2 A \u2206(C) \u21d2 D \\L \u2206(\u0393, A\\C) \u21d2 D A, \u0393 \u21d2 C \\R \u0393 \u21d2 A\\C \u0393 \u21d2 B \u2206(C) \u21d2 D /L \u2206(C/B, \u0393) \u21d2 D \u0393, B \u21d2 C /R \u0393 \u21d2 C/B \u2206(A, B) \u21d2 D \u2022L \u2206(A\u2022B) \u21d2 D \u2206 \u21d2 A \u0393 \u21d2 B \u2022R \u2206, \u0393 \u21d2 A\u2022B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CG",
"sec_num": "1.2"
},
{
"text": "There is completeness when the types are interpreted in free semigroups or monoids Pentus ( , 1998 .",
"cite_spans": [
{
"start": 83,
"end": 98,
"text": "Pentus ( , 1998",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CG",
"sec_num": "1.2"
},
{
"text": "Even among Cut-free proofs there is spurious ambiguity; consider, for example, the sequential derivations of Figure 1 . These have the same parallelized parse structure (proof net), given in Figure 2 : The structures that represent Lambek proofs without spurious ambiguity are planar graphs that must satisfy certain global and local properties, and are called proof nets; for a survey see Lamarche and Retor\u00e9 (1996) . Proof nets provide a geometric perspective on derivational equivalence. Alternatively, we may identify the same algebraic parse structure (Curry-Howard term):",
"cite_spans": [
{
"start": 390,
"end": 416,
"text": "Lamarche and Retor\u00e9 (1996)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 1",
"ref_id": null
},
{
"start": 191,
"end": 199,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Spurious Ambiguity in CG",
"sec_num": "1.2"
},
{
"text": "(7) ((x Q x CN ) \u03bbx((x TV x N ) x)) CN \u21d2 CN N \u21d2 N N \u21d2 N S \u21d2 S \\L N, N\\S \u21d2 S /L N, (N\\S)/N, N \u21d2 S \\R (N\\S)/N, N \u21d2 N\\S S \u21d2 S /L S/(N\\S), (N\\S)/N, N \u21d2 S /L (S/(N\\S))/CN, CN, (N\\S)/N, N \u21d2 S N \u21d2 N CN \u21d2 CN N \u21d2 N S \u21d2 S \\L N, N\\S \u21d2 S \\R N\\S \u21d2 N\\S S \u21d2 S /L S/(N\\S), N\\S \u21d2 S /L (S/(N\\S))/CN, CN, N\\S \u21d2 S /L (S/(N\\S))/CN, CN, (N\\S)/N, N \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious Ambiguity in CG",
"sec_num": "1.2"
},
{
"text": "Spurious ambiguity in CG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "S \u2022 N \u2022 S \u2022 \\ \u2022 N \u2022 S \u2022 / \u2022 CN \u2022 \\ \u2022 N \u2022 S \u2022 / \u2022 CN \u2022 / \u2022 N \u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "Proof net: The parsing structure of categorial grammar (Morrill 2000) .",
"cite_spans": [
{
"start": 55,
"end": 69,
"text": "(Morrill 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "Lambek calculus is continuous (planarity). A major issue in grammar is discontinuity, namely, syntax/semantics mismatch (e.g., the fact that quantifier phrases occur in situ but take sentential scope; or \"gapping\" coordination), hence the displacement calculus. This provides a general accommodation of discontinuity in grammar, by contrast with, for example, Combinatory Categorial Grammar (Steedman 2000) , which seeks minimal case by case augmentations of the formalism to deal with quantification, gapping, and so on. 3",
"cite_spans": [
{
"start": 391,
"end": 406,
"text": "(Steedman 2000)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "But",
"sec_num": null
},
{
"text": "The basic categorial grammar of Ajdukiewicz and Bar-Hillel is concatenative/ continuous/projective, and this feature is reflected in the fact that it is contextfree equivalent in weak generative power (Bar-Hillel, Gaifman, and Shamir 1960) . The same is true of the logical categorial grammar of Lambek (1958) , which is still context free in generative power (Pentus 1992) . The main challenge in natural grammar comes from syntax/semantics mismatch or displacement; such nonconcatenativity/discontinuity/non-projectivity is treated in mainstream linguistics by overt movement (e.g., the verb-raising of cross-serial dependencies) and covert movement (e.g., the quantifier-raising of quantification). The displacement calculus is a",
"cite_spans": [
{
"start": 201,
"end": 239,
"text": "(Bar-Hillel, Gaifman, and Shamir 1960)",
"ref_id": "BIBREF2"
},
{
"start": 296,
"end": 309,
"text": "Lambek (1958)",
"ref_id": null
},
{
"start": 360,
"end": 373,
"text": "(Pentus 1992)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D with Additives, DA",
"sec_num": "2."
},
{
"text": "\u03b1 + \u03b2 = \u03b1 \u03b2 append + : L i , L j \u2192 L i+j \u03b1 1 \u03b3 \u00d7 k \u03b2 = \u03b1 \u03b2 \u03b3 plug \u00d7 k : L i+1 , L j \u2192 L i+j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D with Additives, DA",
"sec_num": "2."
},
{
"text": "Append and plug.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "response to this challenge, which preserves all the good design features of Lambek calculus while extending the generative power and capturing \"movement\" phenomena such as cross-serial dependencies and quantifer-raising. 4 In this section we present displacement calculus D, and a displacement logic DA comprising D with additives. Although D is indeed a conservative extension of the Lambek calculus allowing empty antecedents (L*), we think of it not just as an extension of Lambek calculus but as a generalization, because it involves a whole reformulation to deal with discontinuity while conserving L* as a special case.",
"cite_spans": [
{
"start": 221,
"end": 222,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Displacement calculus is a logic of discontinuous strings-strings punctuated by a separator 1 and subject to operations of append (+; concatenation) and plug (\u00d7 k ; intercalation at the kth separator, counting from the left); see Figure 3 . Recall the definition of types and their sorts, configurations and their sorts, and sequents, for the displacement calculus with additives (i and j range over the naturals 0, 1, . . .):",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 238,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(8) Where P i are primitive types of sort i, Types F i ::= P i F j ::= F i \\F i+j F i ::= F i+j /F j F i+j ::= F i \u2022F j F 0 ::= I F j ::= F i+1 \u2193 k F i+j 1 \u2264 k \u2264 i+1 F i+1 ::= F i+j \u2191 k F j 1 \u2264 k \u2264 i+1 F i+j ::= F i+1 k F j 1 \u2264 k \u2264 i+1 F 1 ::= J F i ::= F i &F i F i ::= F i \u2295F i Sort s(A) = the i s.t. A \u2208 F i For example, s((S\u2191 1 N)\u2191 2 N) = s((S\u2191 1 N)\u2191 1 N) = 2 where s(N) = s(S) = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Where \u039b is the metasyntactic empty string there is now the BNF definition of configurations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Configurations O ::= \u039b | T , O T ::= 1 | F 0 | F i>0 {O : . . . : O i O s }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "For example, there is the configuration (S\u2191 1 N)\u2191 2 N{N, 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "S\u2191 1 N{N}, S}, 1, N, 1 Sort s(O) = |O| 1 For example s((S\u2191 1 N)\u2191 2 N{N, 1 : S\u2191 1 N{N}, S}, 1, N, 1) = 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Sequents Seq(DA) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": ":= O \u21d2 A s.t. s(O) = s(A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The figure \u2212 \u2192 A of a type A is defined by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(9) \u2212 \u2192 A = \uf8f1 \uf8f2 \uf8f3 A if s(A) = 0 A{1 : . . . : 1 s(A) 1 s } if s(A) > 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Where \u0393 is a configuration of sort i and \u2206 1 , . . . , \u2206 i are configurations, the fold \u0393 \u2297 \u2206 1 : . . . : \u2206 i is the result of replacing the successive 1's in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "\u0393 by \u2206 1 , . . . , \u2206 i respectively. For example ((S\u2191 1 N)\u2191 2 N{N, 1 : S\u2191 1 N{N}, S}, 1, N, 1) \u2297 1 : N, N\\S : \u039b is (S\u2191 1 N)\u2191 2 N{N, 1 : S\u2191 1 N{N}, S}, N, N\\S, N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Where \u2206 is a configuration of sort i > 0 and \u0393 is a configuration, the kth metalinguistic wrap",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "\u2206 | k \u0393, 1 \u2264 k \u2264 i, is given by (10) \u2206 | k \u0393 = df \u2206 \u2297 1 : . . . : 1 k\u22121 1's : \u0393 : 1 : . . . : 1 i\u2212k 1's",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "that is, \u2206 | k \u0393 is the configuration resulting from replacing by \u0393 the kth separator in \u2206.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "In intuitive terms, syntactical interpretation of displacement calculus is as follows (see Valent\u00edn [2016] for completeness results):",
"cite_spans": [
{
"start": 91,
"end": 106,
"text": "Valent\u00edn [2016]",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(11) [[A\\C]] = {s 2 | \u2200s 1 \u2208 [[A]], s 1 +s 2 \u2208 [[C]]} [[C/B]] = {s 1 | \u2200s 2 \u2208 [[B]], s 1 +s 2 \u2208 [[C]]} [[A\u2022B]] = {s 1 +s 2 | s 1 \u2208 [[A]] & s 2 \u2208 [[B]]} [[I]] = {0} [[A\u2193 k C]] = {s 2 | \u2200s 1 \u2208 [[A]], s 1 \u00d7 k s 2 \u2208 [[C]]} [[C\u2191 k B]] = {s 1 | \u2200s 2 \u2208 [[B]], s 1 \u00d7 k s 2 \u2208 [[C]]} [[A k B]] = {s 1 \u00d7 k s 2 | s 1 \u2208 [[A]] & s 2 \u2208 [[B]]} [[J]] = {1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The logical rules of the displacement calculus with additives are as follows, where the hyperoccurrence notation \u2206 \u0393 abbreviates \u2206 0 (\u0393 \u2297 \u2206 1 : . . . : \u2206 i ), that is, a potentially discontinuous distinguished occurrence \u0393 with external context \u2206 0 and internal contexts \u2206 1 , . . . , \u2206 n : Lambek (1958 Lambek ( , 1988 , which are defined in relation to concatenation, are the basic means of categorial (sub)categorization. The directional divisions over, /, and under, \\, are exemplified by assignments such as the: N/CN for the man: N and sings: N\\S for John sings: S, and loves: (N\\S)/N for John loves Mary: S. Hence, for the man:",
"cite_spans": [
{
"start": 291,
"end": 303,
"text": "Lambek (1958",
"ref_id": null
},
{
"start": 304,
"end": 319,
"text": "Lambek ( , 1988",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(12) \u0393 \u21d2 A \u2206 \u2212 \u2192 C \u21d2 D \\L \u2206 \u0393, \u2212 \u2212 \u2192 A\\C \u21d2 D \u2212 \u2192 A , \u0393 \u21d2 C \\R \u0393 \u21d2 A\\C \u0393 \u21d2 B \u2206 \u2212 \u2192 C \u21d2 D /L \u2206 \u2212 \u2212 \u2192 C/B, \u0393 \u21d2 D \u0393, \u2212 \u2192 B \u21d2 C /R \u0393 \u21d2 C/B \u2206 \u2212 \u2192 A , \u2212 \u2192 B \u21d2 D \u2022L \u2206 \u2212 \u2212 \u2192 A\u2022B \u21d2 D \u2206 \u21d2 A \u0393 \u21d2 B \u2022R \u2206, \u0393 \u21d2 A\u2022B \u2206 \u039b \u21d2 A IL \u2206 \u2212 \u2192 I \u21d2 A IR \u039b \u21d2 I (13) \u0393 \u21d2 A \u2206 \u2212 \u2192 C \u21d2 D \u2193 k L \u2206 \u0393 | k \u2212 \u2212\u2212 \u2192 A\u2193 k C \u21d2 D \u2212 \u2192 A | k \u0393 \u21d2 C \u2193 k R \u0393 \u21d2 A\u2193 k C \u0393 \u21d2 B \u2206 \u2212 \u2192 C \u21d2 D \u2191 k L \u2206 \u2212\u2212\u2192 C\u2191 k B | k \u0393 \u21d2 D \u0393 | k \u2212 \u2192 B \u21d2 C \u2191 k R \u0393 \u21d2 C\u2191 k B \u2206 \u2212 \u2192 A | k \u2212 \u2192 B \u21d2 D k L \u2206 \u2212 \u2212\u2212 \u2192 A k B \u21d2 D \u2206 \u21d2 A \u0393 \u21d2 B k R \u2206 | k \u0393 \u21d2 A k B \u2206 1 \u21d2 A JL \u2206 \u2212 \u2192 J \u21d2 A JR 1 \u21d2 J (14) \u0393 \u2212 \u2192 A \u21d2 C &L 1 \u0393 \u2212\u2212\u2192 A&B \u21d2 C \u0393 \u2212 \u2192 B \u21d2 C &L 2 \u0393 \u2212\u2212\u2192 A&B \u21d2 C \u0393 \u21d2 A \u0393 \u21d2 B &R \u0393 \u21d2 A&B \u0393 \u2212 \u2192 A \u21d2 C \u0393 \u2212 \u2192 B \u21d2 C \u2295L \u0393 \u2212\u2212\u2192 A\u2295B \u21d2 C \u0393 \u21d2 A \u2295R 1 \u0393 \u21d2 A\u2295B \u0393 \u21d2 B \u2295R 2 \u0393 \u21d2 A\u2295B The continuous multiplicatives {\\, \u2022, /, I} of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(15) CN \u21d2 CN N \u21d2 N /L N/CN, CN \u21d2 N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "And for John sings and John loves Mary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(16) N \u21d2 N S \u21d2 S \\L N, N\\S \u21d2 S N \u21d2 N N \u21d2 N S \u21d2 S \\L N, N\\S \u21d2 S /L N, (N\\S)/N, N \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The continuous product \u2022 is exemplified by a \"small clause\" assignment such as considers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(N\\S)/ (N\u2022(CN/CN)) for John considers Mary socialist: S. (17) N \u21d2 N CN \u21d2 CN CN \u21d2 CN /L CN/CN, CN \u21d2 CN /R CN/CN \u21d2 CN/CN \u2022R N, CN/CN \u21d2 N\u2022(CN/CN) N \u21d2 N S \u21d2 S \\L N, N\\S \u21d2 S /L N, (N\\S)/(N\u2022(CN/CN)), N, CN/CN \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Of course, this use of product is not essential: We could just as well have used ((N\\S)/(CN/CN))/N because in general we have both A/(C\u2022B) \u21d2 (A/B)/C (currying) and (A/B)/C \u21d2 A/(C\u2022B) (uncurrying).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The discontinuous multiplicatives {\u2193, , \u2191, J}, the displacement connectives, of Morrill and Valent\u00edn (2010) and Morrill, Valent\u00edn, and Fadda (2011) are defined in relation to intercalation. When the value of the k subscript is 1 it may be omitted (i.e., it defaults to one). Circumfixation, or extraction, \u2191, is exemplified by a discontinuous particle-verb assignment calls+1+up: (N\\S)\u2191N for Mary calls the man up: S:",
"cite_spans": [
{
"start": 80,
"end": 107,
"text": "Morrill and Valent\u00edn (2010)",
"ref_id": "BIBREF33"
},
{
"start": 112,
"end": 147,
"text": "Morrill, Valent\u00edn, and Fadda (2011)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(18) CN \u21d2 CN N \u21d2 N /L N/CN, CN \u21d2 N N \u21d2 N S \u21d2 S \\L N, N\\S \u21d2 S \u2191L N, (N\\S)\u2191N{N/CN, CN} \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Infixation, \u2193, and extraction together are exemplified by a quantifier assignment everyone: (S\u2191N)\u2193S simulating Montague's S14 rule of quantifying whereby a quantifier phrase can occur where a name can occur (but takes sentential scope):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(19) . . . , N, . . . \u21d2 S \u2191R . . . , 1, . . . \u21d2 S\u2191N id S \u21d2 S \u2193L . . . , (S\u2191N)\u2193S, . . . \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Circumfixation and discontinuous product, , are illustrated in an assignment to a relative pronoun that: (CN\\CN)/((S\u2191N) I) allowing both peripheral and medial extraction, that John likes: CN\\CN and that John saw today: CN\\CN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(20) N, (N\\S)/N, N \u21d2 S \u2191R N, (N\\S)/N, 1 \u21d2 S\u2191N IL \u21d2 I R N, (N\\S)/N \u21d2 (S\u2191N) I CN\\CN \u21d2 CN\\CN /L (CN\\CN)/((S\u2191N) I), N, (N\\S)/N \u21d2 CN\\CN (21) N, (N\\S)/N, N, S\\S \u21d2 S \u2191R N, (N\\S)/N, 1, S\\S \u21d2 S\u2191N IL \u21d2 I R N, (N\\S)/N, S\\S \u21d2 (S\u2191N) I CN\\CN \u21d2 CN\\CN /L (CN\\CN)/((S\u2191N) I), N, (N\\S)/N, S\\S \u21d2 CN\\CN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The additive conjunction and disjunction {&, \u2295} of Lambek (1961) , Morrill (1990) , and Kanazawa (1992) capture polymorphism. For example, the additive conjunction & can be used for rice: N&CN as in rice grows: S and the rice grows: S:",
"cite_spans": [
{
"start": 51,
"end": 64,
"text": "Lambek (1961)",
"ref_id": "BIBREF21"
},
{
"start": 67,
"end": 81,
"text": "Morrill (1990)",
"ref_id": "BIBREF28"
},
{
"start": 88,
"end": 103,
"text": "Kanazawa (1992)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(22) N \u21d2 N &L 1 N&CN \u21d2 N S \u21d2 S \\L N&CN, N\\S \u21d2 S N/CN, CN, N\\S \u21d2 S &L 2 N/CN, N&CN, N\\S \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The additive disjunction \u2295 can be used for is: (N\\S)/(N\u2295(CN/CN)) as in Tully is Cicero: S and Tully is humanist: S:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(23) N \u21d2 N \u2295R 1 N \u21d2 N\u2295(CN/CN) N\\S \u21d2 N\\S /L (N\\S)/(N\u2295(CN/CN)), N \u21d2 N\\S CN/CN \u21d2 CN/CN \u2295R 2 CN/CN \u21d2 N\u2295(CN/CN) N\\S \u21d2 N\\S /L (N\\S)/(N\u2295(CN/CN)), CN/CN \u21d2 N\\S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "This section elaborates the bibliographic context of so-called spurious ambiguity, a problem frequently arising in varieties of parsing of different formalisms. The spurious ambiguity that has been discussed in the literature falls in two broad classes: that for categorial parsing and that for dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "The literature on spurious ambiguity for categorial parsing is represented by the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "r Hepple (1990) provides an analysis of normal form theorem proving for Cut-free Lambek calculus without product. Two systems are considered: first, a notion of normalization for this implicative Lambek calculus, and second, a constructive calculus generating all and only the normal forms of the first. The latter consists of applying right rules as much as possible and then switching to left rules, which revert to the right phase in the minor premise and which conserve the value active type in the major premise. Hepple shows that the system is sound and complete and that it delivers a unique proof in each Cut-free semantic equivalence class. This amounts to uniform proof (Miller et al. 1991) , but was developed independently. Retrospectively, it can be seen as focusing for the implicative fragment of Lambek calculus. It is straightforwardly extendible to product right (Hendriks 1993) , but not product left, which requires the deeper understanding of the focusing method.",
"cite_spans": [
{
"start": 2,
"end": 15,
"text": "Hepple (1990)",
"ref_id": "BIBREF11"
},
{
"start": 680,
"end": 700,
"text": "(Miller et al. 1991)",
"ref_id": "BIBREF25"
},
{
"start": 881,
"end": 896,
"text": "(Hendriks 1993)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "r Eisner (1996) provides a normal form framework for Combinatory Categorial Grammar (CCG) with generalized binary composition rules. CCG is a version of categorial grammar with a small number of combinatory schemata (or a version of CFG with an infinite number of non-terminals) in which the basic directional categorial cancellation rules are extended with phrase structure schemata corresponding to combinators of combinatory logic. Eisner, following Hepple and Morrill (1989) , defines a notion of normal form by a restriction on which rule mothers can serve as which daughters of subsequent rules in bottom-up parsing. Eisner notes that his marking of rules has a rough resemblance to the rule regulation of Hendriks (1993) (which is focusing). But whereas the former is Cut-based rule normalization, the latter is Cut-free metarule normalization. Eisner's method applies to a wide range of harmonic and non-harmonic composition rules, but not to type-lifting rules. It has been suggested by G. Penn (personal communication) that it is not clear whether CCG is actually categorial grammar; at least, it is not logical categorial grammar; in any case, the focusing methodology we use here is still normalization, but represents a much more general discipline than that used by Eisner. LG considered, suitable structural linear distributivity postulates facilitate the capacity to describe non-context free patterns. By contrast with displacement calculus D, which is single-conclusioned (it is intuitionistic), and which absorbs the structural properties in the sequent syntax (it has no structural postulates), LG has a non-classical term regime that can assign multiple readings to a proof net, and has semantically non-neutral focusing and defocusing rules. Thus it is quite different from the system considered here in several respects. 
The multiplicative focusing for LG is a straightforward adaptation of that for linear logic and additives are not addressed; importantly, the backward-chaining focused LG search space still requires the linear distributivity postulates (which leave no trace in the terms assigned), whereas the backward chaining D search space has no structural postulates.",
"cite_spans": [
{
"start": 2,
"end": 15,
"text": "Eisner (1996)",
"ref_id": "BIBREF7"
},
{
"start": 453,
"end": 478,
"text": "Hepple and Morrill (1989)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "The literature on spurious ambiguity for dependency parsing is represented by the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "r Huang, Vogel, and Chen (2011) address the problem of word alignment between pairs of corresponding sentences in (statistical) machine translation. Because this task may be very complex (in fact the search space may grow exponentially), a technique called synchronous parsing is used to constrain the search space. However, this approach exhibits the problem of spurious ambiguity, which that paper elaborates in depth. We do not know at this moment whether this work can bring us useful techniques to deal with spurious ambiguity in the field of logic theorem-proving.",
"cite_spans": [
{
"start": 2,
"end": 31,
"text": "Huang, Vogel, and Chen (2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "r Goldberg and Nivre (2012) and Cohen, G\u00f3mez-Rodr\u00edguez, and Satta (2012) focus on the problem of spurious ambiguity of a general technique for dependency parsing called transition-based dependency parsing. The spurious ambiguity investigated in that paper arises in transition systems where different sequences of transitions yield the same dependency tree. The framework of dependency grammars is non-logical in its nature and the spurious ambiguity referred here is completely different from the one addressed in our article, and unfortunately not useful to our work.",
"cite_spans": [
{
"start": 2,
"end": 27,
"text": "Goldberg and Nivre (2012)",
"ref_id": "BIBREF8"
},
{
"start": 32,
"end": 72,
"text": "Cohen, G\u00f3mez-Rodr\u00edguez, and Satta (2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "r Hayashi, Kondo, and Matsumoto (2013) , which is situated in the field of dependency parsing, proposes sophisticated parsing techniques that may have spurious ambiguity. A method is presented in order to deal with it based on normalization and canonicity of sequences of actions of appropriate transition systems.",
"cite_spans": [
{
"start": 2,
"end": 38,
"text": "Hayashi, Kondo, and Matsumoto (2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "All these approaches tackle by a kind of normalization the general phenomena of spurious ambiguity, whereby different but equivalent alternative applications of operations result in the same output. Focalization in particular is applicable to logical grammar, in which parsing is deduction. This turns out to be quite a deep methodology, revealing, for example, not only invertible (\"reversible\") rules, which were known to the early proof-theorists, but also dual focusing (\"irreversible\") rules, providing a general discipline for logical normalization, which we now elaborate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Spurious Ambiguity in Computational Linguistics",
"sec_num": "3."
},
{
"text": "In previous sections we have shown the proof machinery of DA. A further sequent rule is missing, the so-called Cut rule, which incorporates in the sequent calculus the notion of (contextualised) transitivity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Spurious Ambiguity: On Focalization",
"sec_num": "4."
},
{
"text": "(24) \u0393 \u21d2 A \u2206 \u2212 \u2192 A \u21d2 B Cut \u2206 \u0393 \u21d2 B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Spurious Ambiguity: On Focalization",
"sec_num": "4."
},
{
"text": "Note that the Cut rule does not have the property that every type in the premises is a (sub)type of the conclusion (the subformula property), hence it is an obstacle to proof search. Any standard logic should have the Cut rule because this encodes transitivity of the entailment relation, but at the same time a major result of logics presented as sequent calculi is the Cut-elimination theorem, or as it is usually known after Gentzen, Haupsatz. This very important theorem gives as a byproduct the subformula property and decidability in the case of DA. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing Spurious Ambiguity: On Focalization",
"sec_num": "4."
},
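The contrast between Cut and the subformula-respecting rules can be made concrete with a small sketch (our own illustration, not part of the paper's formalism): encoding types as nested tuples and enumerating subformulas shows that backward proof search through Cut would have to guess a type that need not occur anywhere in the conclusion.

```python
# Illustrative sketch (not the authors' implementation): types are atoms
# (strings) or triples (connective, left, right), e.g. ('/', C, B) for C/B.

def subformulas(t):
    """Yield t together with all of its subtypes."""
    yield t
    if isinstance(t, tuple):
        _, left, right = t
        yield from subformulas(left)
        yield from subformulas(right)

N, S = 'N', 'S'
tv = ('/', ('\\', N, S), N)   # the transitive verb type (N\S)/N

# Rules with the subformula property only ever introduce subtypes of the
# end-sequent, so backward search stays inside this finite set:
search_space = set(subformulas(tv)) | {S}

# The Cut type, by contrast, is unconstrained: a type such as S/N need not
# occur in the conclusion, so backward search through Cut would have to
# guess it among infinitely many candidates.
assert ('\\', N, S) in search_space      # a genuine subtype
assert ('/', S, N) not in search_space   # a possible Cut type, absent
```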
{
"text": "\"Reversible\" rules are rules in which the conclusions are provable if and only if the premises are provable. The reversible rules of displacement calculus with additives are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
{
"text": "/R, \\R, \u2022L, IL, \u2191 k R, \u2193 k R, k L, JL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
{
"text": ", &R, and \u2295L. By way of example, consider /R:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
{
"text": "(25) a. \u0393 \u21d2 C/B b. \u0393, B \u21d2 C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
{
"text": "We can safely reverse sequent (25a) into sequent (25b) because both provability and nonprovability are preserved-that is, \u0393 \u21d2 C/B is provable iff \u0393, B \u21d2 C is provable. We say that the type-occurrence of C/B in the succedent of (25a) is reversible. 6 This means that in the face of alternative reversible rule options a choice can be made arbitrarily and the other possibilities forgotten (don't care nondeterminism). Dually, \"irreversible\" rules are rules in which it is not the case that the conclusions are provable if and only if the premises are provable. The irreversible rules of displacement calculus with additives are: /L, \\L, \u2022R, IR, \u2191 k L, \u2193 k L, k R, JR, &L and \u2295R. By way of example, consider the /L rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
{
"text": "(26) \u0393 \u21d2 B \u2206(C) \u21d2 D /L \u2206(C/B, \u0393) \u21d2 D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
{
"text": "We cannot assume safely that there exist configurations \u0393 and \u2206(C) that match the antecedent of the end-sequent, and that preserve provability: We do not have that \u2206(C/B, \u0393) \u21d2 D is provable iff \u0393 \u21d2 B and \u2206(C) \u21d2 D are provable. In this case, we say that the distinguished type-occurrence C/B in the above sequent is irreversible. 7 In the face of alternative irreversible rule options, the choice matters and different possibilities must be tried (don't know nondeterminism).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Cut-Free Proof Search",
"sec_num": "4.1"
},
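The two kinds of nondeterminism can be seen in a toy backward-chaining prover for the product-free Lambek fragment (a sketch of ours, not the paper's system; `derivable` and the tuple encoding of types are assumed names): the reversible rules /R and \R are applied eagerly without backtracking, while the irreversible rules /L and \L force a don't know search over antecedent splits.

```python
# Toy prover for the product-free Lambek calculus, illustrating don't care
# versus don't know nondeterminism. Types: atoms (strings), ('/', C, B) for
# C/B, and ('\\', A, C) for A\C.

def derivable(ante, succ):
    # Reversible phase (don't care): decompose a complex succedent eagerly;
    # provability is preserved in both directions, so we never backtrack here.
    if isinstance(succ, tuple):
        op, x, y = succ
        if op == '/':      # Gamma => C/B  iff  Gamma, B => C
            return derivable(ante + [y], x)
        else:              # Gamma => A\C  iff  A, Gamma => C
            return derivable([x] + ante, y)
    # Identity axiom.
    if ante == [succ]:
        return True
    # Irreversible phase (don't know): try every /L and \L instance.
    for i, t in enumerate(ante):
        if not isinstance(t, tuple):
            continue
        op, x, y = t
        if op == '/':      # C/B consumes some nonempty Gamma to its right
            for j in range(i + 2, len(ante) + 1):
                if derivable(ante[i + 1:j], y) and \
                   derivable(ante[:i] + [x] + ante[j:], succ):
                    return True
        else:              # A\C consumes some nonempty Gamma to its left
            for j in range(0, i):
                if derivable(ante[j:i], x) and \
                   derivable(ante[:j] + [y] + ante[i + 1:], succ):
                    return True
    return False

# N, N\S => S, and type lifting: N => S/(N\S).
assert derivable(['N', ('\\', 'N', 'S')], 'S')
assert derivable(['N'], ('/', 'S', ('\\', 'N', 'S')))
```

Because the irreversible phase blindly tries every focusable type and every split, the same underlying proof is found many times over; this is exactly the redundancy that focalization is designed to remove.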
{
"text": "Rules. Consider the following sequents where two distinguished type-occurrences have been underlined:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "((N\\S)/N)/N, N \u2022 N \u21d2 N\\S ((N\\S)/N)/N, N \u2022 N \u21d2 N\\S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "There are the following two commutative derivation fragments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "N, ((N\\S)/N)/N, N, N \u21d2 S \u2022L N, ((N\\S)/N)/N, N\u2022N \u21d2 S \\R ((N\\S)/N)/N, N\u2022N \u21d2 N\\S N, ((N\\S)/N)/N, N, N \u21d2 S \\R ((N\\S)/N)/N, N, N \u21d2 N\\S \u2022L ((N\\S)/N)/N, N\u2022N \u21d2 N\\S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "r Applying the two rules in either order, both (proof) derivations have the same syntactic structure (and the same semantic lambda-term labeling, which we introduce later when we present focused calculus).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "r The reversible rules \u2022L and \\R commute in the order of application in the proof search. These commutations contribute to spurious ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "r Reversible rules can be applied don't care nondeterministically. This result was already known by proof-theorists in the 1930s (Gentzen, and others such as Kleene). These rules were called invertible rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Reversible",
"sec_num": "4.1.1"
},
{
"text": "CN \u21d2 CN /L N, (N\\S)/N, N \u21d2 S \\R (N\\S)/N, N \u21d2 N\\S S \u21d2 S /L S/(N\\S), (N\\S)/N, N \u21d2 S /L (S/(N\\S))/CN, CN, (N\\S)/N, N \u21d2 S N \u21d2 N CN \u21d2 CN S/(N\\S), N\\S \u21d2 S /L (S/(N\\S))/CN, CN, N\\S \u21d2 S /L (S/(N\\S))/CN, CN, (N\\S)/N, N \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Irreversible Rules. Consider the following:",
"sec_num": "4.1.2"
},
{
"text": "r Dually, both (proof) derivations have the same syntactic structure, (i.e., proof net), as well as the same semantic term labeling codifying the structure of derivations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Irreversible Rules. Consider the following:",
"sec_num": "4.1.2"
},
{
"text": "r The irreversible rules for / (S/(N\\S))/CN and / (N\\S)/N commute in the order of application in the proof search. These commutations also contribute to spurious ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Irreversible Rules. Consider the following:",
"sec_num": "4.1.2"
},
{
"text": "r Contrary to the behavior of reversible types, the choice of the active type (the type decomposed) is critical when it is irreversible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "On Irreversible Rules. Consider the following:",
"sec_num": "4.1.2"
},
{
"text": "r A combinatorial explosion in the finite Cut-free proof search space due to the aforementioned rule commutativity. r This problem is exacerbated in displacement calculus because there are more connectives and hence more rules giving rise to spurious ambiguities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consequences",
"sec_num": "4.2"
},
{
"text": "r A good partial solution: the discipline of focalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consequences",
"sec_num": "4.2"
},
{
"text": "Reversible rules commute in their order of application and their key property is reversability. Dually, irreversible rules also commute and their key property (completely unknown to the early proof-theorists 8 ) is so-called focalization, which was defined by Andreoli in Andreoli (1992) .",
"cite_spans": [
{
"start": 260,
"end": 287,
"text": "Andreoli in Andreoli (1992)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Toward a (Considerable) Elimination of Spurious Ambiguity: Focalization",
"sec_num": "4.3"
},
{
"text": "Given a sequent, reversible rules are applied in a don't care nondeterministic fashion until no longer possible. When there are no occurrences of reversible types, one chooses an irreversible type don't know nondeterministically as active type-occurrence; we say that this type occurrence is in focus, or that it is focalized; and one applies proof search to its subtypes while these remain irreversible. When one finds a reversible type, reversible rules are applied in a don't care nondeterministic fashion again until no longer possible, when another irreversible type is chosen, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Discipline of Focalization",
"sec_num": "4.4"
},
{
"text": "A \"Gymkhana in the Proof-search\". Let us consider the following provable sequent with types Q = S/(N\\S), and 27has the algebraic Curry-Howard term:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Non-focalized Proof:",
"sec_num": "4.4.1"
},
{
"text": "TV = (N\\S)/N. (27) Q/CN, CN, TV, N \u21d2 S Sequent",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Non-focalized Proof:",
"sec_num": "4.4.1"
},
{
"text": "(28) ((x Q/CN x CN ) \u03bbx((x TV x N ) x))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Non-focalized Proof:",
"sec_num": "4.4.1"
},
{
"text": "Consider the proof fragment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Non-focalized Proof:",
"sec_num": "4.4.1"
},
{
"text": "(29) CN \u21d2 CN N \u21d2 N . . . /L Q, N\\S \u21d2 S /L Q, TV, N \u21d2 S /L Q/CN, CN, TV, N \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Non-focalized Proof:",
"sec_num": "4.4.1"
},
{
"text": "This fragment of proof concludes into a correct proof derivation. But crucially, the discipline of focalization is not applied resulting-in the words of the father of linear logic-in a gymkhana! (In the sense of jumping back and forth.) The foci on irreversible types are not preserved and alternating irreversible types are chosen. Concretely, the underlines show how the active irreversible type in the right premise is not a subtype of the active irreversible type in the endsequent. The discipline of Andreoli's focalization consists of maintaining the focus on the irreversible subtypes, hence reducing the search space and reducing spurious ambiguity. This rests on the focusing property: that if an irreversible rule instance leads to a proof, it leads to a proof in which the subtypes of the active type are the active types of the premises, if they are irreversible. This is the remarkable proof-theoretic discovery of Andreoli, which fortunately can be applied in the present case as this article explains. The discipline of focalization reduces dramatically the combinatorial explosion of categorial proof-search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Non-focalized Proof:",
"sec_num": "4.4.1"
},
{
"text": "A fragment of focalized proof derivation for (27) is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Focalized Proof Derivation.",
"sec_num": "4.4.2"
},
{
"text": "(30) CN \u21d2 CN . . . \\R TV, N \u21d2 N\\S S \u21d2 S /L Q, TV, N \u21d2 S /L Q/CN, CN, TV, N \u21d2 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Focalized Proof Derivation.",
"sec_num": "4.4.2"
},
{
"text": "Note that the focus is correctly propagated to the irreversible subtype of the focused irreversible type in the endsequent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Focalized Proof Derivation.",
"sec_num": "4.4.2"
},
{
"text": "r What happens with literal types? Can they be considered reversible or irreversible? What happens then in the focalized proof-search paradigm?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Last Ingredient in the Focalization Discipline",
"sec_num": "4.5"
},
{
"text": "r The answers: Literals can be assigned what is called in the literature a reversible or irreversible bias in any manner according to which they belong to the set of reversible or irreversible types. (As we state later we leave open the question of which biases may be favorable.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Last Ingredient in the Focalization Discipline",
"sec_num": "4.5"
},
{
"text": "Depending on the context, we will say compound reversible or irreversible types, or atomic reversible or irreversible types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Last Ingredient in the Focalization Discipline",
"sec_num": "4.5"
},
{
"text": "In focalization, situated (in the antecedent of a sequent, input: \u2022 ; in the succedent of a sequent, output: \u2022 ) compound types are classified as of reversible or irreversible polarity according to whether their associated rule is reversible or not; situated atoms obviously do not have associated rules, but we can extend the concept of polarity to them, overloading the terms reversible and irreversible, applying the previously mentioned concept of bias. We define in Example (31) a BNF grammar for arbitrary types (observe that atomic types are classified as either At \u2022 or At \u2022 ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focalization for DA",
"sec_num": "5."
},
{
"text": "(31) P ::= At \u2022 | A\u2022B | I | A k B | J | A\u2295B Q ::= At \u2022 | C/B | A\\C | C\u2191 k B | A\u2193 k C | A&B P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focalization for DA",
"sec_num": "5."
},
{
"text": "and Q in Example (31) denote reversible types (including atomic types) occurring in input and output position, respectively. Dually, if P and Q occur in the output and input position, respectively, then they are said to occur with irreversible polarity. In the atomic case, the same terms (reversible and irreversible) are also used. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focalization for DA",
"sec_num": "5."
},
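As a sketch of this classification (our own encoding, not the paper's; the ASCII connective names and the atom biases are illustrative assumptions), the polarity of a situated type is computed from its main connective and its position, falling back on the stipulated bias for atoms:

```python
# Sketch of the polarity classification in (31). Connectives whose left
# rule is reversible give reversible input occurrences; those whose right
# rule is reversible give reversible output occurrences (cf. Section 4.1).

LEFT_REVERSIBLE = {'prod', 'I', 'wrap', 'J', 'oplus'}   # the P connectives
RIGHT_REVERSIBLE = {'/', '\\', 'up', 'down', 'with'}    # the Q connectives

BIAS = {'N': 'input', 'S': 'output', 'CN': 'input'}     # hypothetical atom biases

def polarity(t, position):
    """position is 'input' (antecedent) or 'output' (succedent)."""
    if isinstance(t, str):                              # atom: stipulated bias
        return 'reversible' if BIAS[t] == position else 'irreversible'
    connective = t[0]
    table = LEFT_REVERSIBLE if position == 'input' else RIGHT_REVERSIBLE
    return 'reversible' if connective in table else 'irreversible'

# C/B is a Q-type: reversible as a succedent, irreversible as an antecedent.
assert polarity(('/', 'S', 'N'), 'output') == 'reversible'
assert polarity(('/', 'S', 'N'), 'input') == 'irreversible'
# A*B is a P-type: the mirror image.
assert polarity(('prod', 'N', 'N'), 'input') == 'reversible'
assert polarity(('prod', 'N', 'N'), 'output') == 'irreversible'
```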
{
"text": "A : x, \u0393 \u21d2 C: \u03c7 \u00ac foc \\R \u0393 \u21d2 A\\C: \u03bbx\u03c7 \u00ac foc \u2227 rev \u0393, \u2212 \u2192 B : y \u21d2 C: \u03c7 \u00ac foc /R \u0393 \u21d2 C/B: \u03bby\u03c7 \u00ac foc \u2227 rev \u2206 \u2212 \u2192 A : x, \u2212 \u2192 B : y \u21d2 D: \u03c9 \u00ac foc \u2022L \u2206 \u2212 \u2212 \u2192 A\u2022B: z \u21d2 D: \u03c9{\u03c0 1 z/x, \u03c0 2 z/y} \u00ac foc \u2227 rev \u2206 \u039b \u21d2 A: \u03c6 \u00ac foc IL \u2206 \u2212 \u2192 I : x \u21d2 A: \u03c6 \u00ac foc \u2227 rev \u2212 \u2192 A : x | k \u0393 \u21d2 C: \u03c7 \u00ac foc \u2193 k R \u0393 \u21d2 A\u2193 k C: \u03bbx\u03c7 \u00ac foc \u2227 rev \u0393 | k \u2212 \u2192 B : y \u21d2 C: \u03c7 \u00ac foc \u2191 k R \u0393 \u21d2 C\u2191 k B: \u03bby\u03c7 \u00ac foc \u2227 rev \u2206 \u2212 \u2192 A : x | k \u2212 \u2192 B : y \u21d2 D: \u03c9 \u00ac foc k L \u2206 \u2212 \u2212\u2212 \u2192 A k B: z \u21d2 D: \u03c9{\u03c0 1 z/x, \u03c0 2 z/y} \u00ac foc \u2227 rev \u2206 1 \u21d2 A: \u03c6 \u00ac foc JL \u2206 \u2212 \u2192 J : x \u21d2 A: \u03c6 \u00ac foc \u2227 rev Figure 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "Reversible multiplicative rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "\u0393 \u21d2 A: \u03c6 \u00ac foc \u0393 \u21d2 B: \u03c8 \u00ac foc &R \u0393 \u21d2 A&B: (\u03c6, \u03c8) \u00ac foc \u2227 rev \u0393 \u2212 \u2192 A : x \u21d2 C: \u03c7 1 \u00ac foc \u0393 \u2212 \u2192 B : y \u21d2 C: \u03c7 2 \u00ac foc \u2295L \u0393 \u2212\u2212\u2192 A\u2295B: z \u21d2 C: z \u2192 x.\u03c7 1 ; y.\u03c7 2 \u00ac foc \u2227 rev Figure 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "Reversible additive rules. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "\u0393 \u21d2 P : \u03c6 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 Q : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev \\L \u2206 \u0393, \u2212\u2212\u2212\u2192 P\\Q : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 P 1 : \u03c6 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 P 2 : z \u21d2 D: \u03c9 \u00ac foc \\L \u2206 \u0393, \u2212\u2212\u2212\u2212\u2192 P 1 \\P 2 : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q 1 : \u03c6 \u00ac foc \u2206 \u2212 \u2212 \u2192 Q 2 : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev \\L \u2206 \u0393, \u2212 \u2212\u2212\u2212\u2212 \u2192 Q 1 \\Q 2 : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c6 \u00ac foc \u2206 \u2212 \u2192 P : z \u21d2 D: \u03c9 \u00ac foc \\L \u2206 \u0393, \u2212\u2212\u2212\u2192 Q\\P : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 P : \u03c8 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 Q : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev /L \u2206 \u2212\u2212\u2212\u2192 Q/P : x, \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q 1 : \u03c8 \u00ac foc \u2206 \u2212 \u2212 \u2192 Q 2 : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev /L \u2206 \u2212 \u2212\u2212\u2212\u2212 \u2192 Q 2 /Q 1 : x, \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev \u0393 \u21d2 P 1 : \u03c8 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 P 2 : z \u21d2 D: \u03c9 \u00ac foc /L \u2206 \u2212\u2212\u2212\u2212\u2192 P 2 /P 1 : x, \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c8 \u00ac foc \u2206 \u2212 \u2192 P : z \u21d2 D: \u03c9 \u00ac foc /L \u2206 \u2212 \u2212\u2212 \u2192 P/Q : x, \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "In order to prove that DA is complete with respect to focalization, we define a logic DA Foc with the following features: (a) the set of configurations O is extended to the set O box , (b) the set of sequents Seq(DA) is extended to the set Seq(DA Foc ), (c) a new set of logical rules. The set of configurations O box contains O, and in addition it contains boxed configurations, by which we understand configurations where a unique irreversible type-occurrence is decorated with a box, thus: A . The set of sequents Seq(DA Foc ) includes DA sequents with possibly a box in the sequent. We have then Seq(DA) Seq(DA Foc ). Sequents of Seq(DA Foc ) can contain at most one boxed type-occurrence. The meaning of such a box is to mark in the proof-search an irreversible (possibly atomic) type-occurrence either in the antecedent or in the succedent of a sequent. We will say that such a sequent is focalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "\u0393 \u21d2 P : \u03c6 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 Q : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev \u2193 k L \u2206 \u0393 | k \u2212 \u2212\u2212\u2212 \u2192 P\u2193 k Q : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 P 1 : \u03c6 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 P 2 : z \u21d2 D: \u03c9 \u00ac foc \u2193 k L \u2206 \u0393 | k \u2212 \u2212\u2212\u2212\u2212 \u2192 P 1 \u2193 k P 2 : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q 1 : \u03c6 \u00ac foc \u2206 \u2212 \u2212 \u2192 Q 2 : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev \u2193 k L \u2206 \u0393 | k \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 Q 1 \u2193 k Q 2 : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c6 \u00ac foc \u2206 \u2212 \u2192 P : z \u21d2 D: \u03c9 \u00ac foc \u2193 k L \u2206 \u0393 | k \u2212 \u2212\u2212\u2212 \u2192 Q\u2193 k P : y \u21d2 D: \u03c9{(y \u03c6)/z} foc \u2227 \u00ac rev \u0393 \u21d2 P : \u03c8 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 Q : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev \u2191 k L \u2206 \u2212 \u2212\u2212\u2212 \u2192 Q\u2191 k P : x | k \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q 1 : \u03c8 \u00ac foc \u2206 \u2212 \u2212 \u2192 Q 2 : z \u21d2 D: \u03c9 foc \u2227 \u00ac rev \u2191 k L \u2206 \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 Q 2 \u2191 k Q 1 : x | k \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev \u0393 \u21d2 P 1 : \u03c8 foc \u2227 \u00ac rev \u2206 \u2212 \u2192 P 2 : z \u21d2 D: \u03c9 \u00ac foc \u2191 k L \u2206 \u2212 \u2212\u2212\u2212\u2212 \u2192 P 2 \u2191 k P 1 : x | k \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c8 \u00ac foc \u2206 \u2212 \u2192 P : z \u21d2 D: \u03c9 \u00ac foc \u2191 k L \u2206 \u2212 \u2212\u2212\u2212 \u2192 P\u2191 k Q : x | k \u0393 \u21d2 D: \u03c9{(x \u03c8)/z} foc \u2227 \u00ac rev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "Additionally, the set Seq(DA Foc ) is constrained as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "(33) Boxed sequents cannot contain compound reversible types",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "We will use judgements foc and rev on DA Foc -sequents. Where S \u2208 Seq(DA Foc ), S foc means that S contains a boxed type occurrence, and S rev means that there is a complex reversible type occurrence. It follows that Constraint (33) can be judged as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "\u0393 \u2212 \u2192 Q : x \u21d2 C: \u03c7 foc \u2227 \u00ac rev &L 1 \u0393 \u2212 \u2212\u2212\u2212 \u2192 Q&B : z \u21d2 C: \u03c7{\u03c0 1 z/x} foc \u2227 \u00ac rev \u0393 \u2212 \u2192 P : x \u21d2 C: \u03c7 \u00ac foc &L 1 \u0393 \u2212\u2212\u2212\u2192 P&B : z \u21d2 C: \u03c7{\u03c0 1 z/x} foc \u2227 \u00ac rev \u0393 \u2212 \u2192 Q : y \u21d2 C: \u03c7 foc \u2227 \u00ac rev &L 2 \u0393 \u2212 \u2212\u2212\u2212 \u2192 A&Q : z \u21d2 C: \u03c7{\u03c0 2 z/y} foc \u2227 \u00ac rev \u0393 \u2212 \u2192 P : y \u21d2 C: \u03c7 \u00ac foc &L 2 \u0393 \u2212 \u2212\u2212\u2212 \u2192 A&P : z \u21d2 C: \u03c7{\u03c0 2 z/y} foc \u2227 \u00ac rev Figure 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "Left irreversible additive rules. S foc \u2227 \u00ac rev. The judgment S\u00ac foc means that S is not focalized (and so may contain reversible type-occurrences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "\u2206 \u21d2 P 1 : \u03c6 foc \u2227 \u00ac rev \u0393 \u21d2 P 2 : \u03c8 foc \u2227 \u00ac rev \u2022R \u2206, \u0393 \u21d2 P 1 \u2022P 2 : (\u03c6, \u03c8) foc \u2227 \u00ac rev \u2206 \u21d2 P : \u03c6 foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c8 \u00ac foc \u2022R \u2206, \u0393 \u21d2 P\u2022Q : (\u03c6, \u03c8) foc \u2227 \u00ac rev \u2206 \u21d2 N: \u03c6 \u00ac foc \u0393 \u21d2 P : \u03c8 foc \u2227 \u00ac rev \u2022R \u2206, \u0393 \u21d2 N\u2022P : (\u03c6, \u03c8) foc \u2227 \u00ac rev \u2206 \u21d2 N 1 : \u03c6\u00ac foc \u0393 \u21d2 N 2 : \u03c8 foc \u2022R \u2206, \u0393 \u21d2 N 1 \u2022N 2 : (\u03c6, \u03c8) foc \u2227 \u00ac rev IR \u039b \u21d2 I : 0 foc \u2227 \u00ac rev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "The rules for focused inference are given in Figures 4-12 , where sequents are accompanied by judgments: focalized or not focalized and reversible or not reversible. The constraint (33) is key in determining the judgments; in the figures showing irreversible additive or multiplicative rules, when the premise is not focalized only the corresponding subtype can be reversible. This system, which is Cut-free, is what we call strongly focalized and induces maximal alternating phases of reversible and irreversible rule",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 57,
"text": "Figures 4-12",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "\u2206 \u21d2 P 1 : \u03c6 foc \u2227 \u00ac rev \u0393 \u21d2 P 2 : \u03c8 foc \u2227 \u00ac rev k R \u2206 | k \u0393 \u21d2 P 1 k P 2 : (\u03c6, \u03c8) foc \u2227 \u00ac rev \u2206 \u21d2 P : \u03c6 foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c8 \u00ac foc k R \u2206 | k \u0393 \u21d2 P k Q : (\u03c6, \u03c8) foc \u2227 \u00ac rev \u2206 \u21d2 Q: \u03c6 \u00ac foc \u0393 \u21d2 P : \u03c8 foc \u2227 \u00ac rev k R \u2206 | k \u0393 \u21d2 Q k P : (\u03c6, \u03c8) foc \u2227 \u00ac rev \u2206 \u21d2 Q 1 : \u03c6 \u00ac foc \u0393 \u21d2 Q 2 : \u03c8 \u00ac foc k R \u2206 | k \u0393 \u21d2 Q 1 k Q 2 : (\u03c6, \u03c8) foc \u2227 \u00ac rev JR 1 \u21d2 J : 0 foc \u2227 \u00ac rev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completeness of Focalization for DA",
"sec_num": "6."
},
{
"text": "Right irreversible discontinuous multiplicative rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "\u0393 \u21d2 P : \u03c6 foc \u2227 \u00ac rev \u2295R 1 \u0393 \u21d2 P\u2295B : \u03b9 1 \u03c6 foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c6 \u00ac foc \u2295R 1 \u0393 \u21d2 Q\u2295B : \u03b9 1 \u03c6 foc \u2227 \u00ac rev \u0393 \u21d2 P : \u03c8 foc \u2227 \u00ac rev \u2295R 2 \u0393 \u21d2 A\u2295P : \u03b9 2 \u03c8 foc \u2227 \u00ac rev \u0393 \u21d2 Q: \u03c8 \u00ac foc \u2295R 2 \u0393 \u21d2 A\u2295Q : \u03b9 2 \u03c8 foc \u2227 \u00ac rev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 10",
"sec_num": null
},
{
"text": "Right irreversible additive rules. The top level call to determine whether a sequent S is provable is prove(S). The routine prove(S) calls the routine prove rev lst with actual parameter the unitary list [S] . The routine prove rev lst then applies reversible rules to its list of sequents Ls in a don't care nondeterministic manner until none of the sequents contain any reversible type (i.e., it closes Ls under reversible rules). Then prove irrev lst is called on the list of sequents. This calls prove irrev(S ) for focusings S of each sequent, and if some focusing of each sequent is provable the result true is returned, otherwise false is returned. The procedure prove irrev applies focusing rules and recurses back on prove rev lst and prove irrev lst to determine provability for the given focusing.",
"cite_spans": [
{
"start": 204,
"end": 207,
"text": "[S]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
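The control structure just described can be sketched for the product-free Lambek fragment (our illustrative reconstruction using the routine names above, not the authors' code; the full system of course covers all DA connectives): prove_rev_lst closes its sequent list under the reversible rules /R and \R without backtracking, prove_irrev_lst tries each focusing, and prove_irrev keeps the focus on irreversible (here: complex) subtypes, as the focusing property licenses.

```python
# Skeletal focused prover, product-free Lambek fragment. Sequents are pairs
# (antecedent list, succedent); types are atoms, ('/', C, B), or ('\\', A, C).

def prove(seq):
    return prove_rev_lst([seq])

def prove_rev_lst(seqs):
    """Don't care phase: close every sequent under the reversible rules."""
    closed = []
    for ante, succ in seqs:
        while isinstance(succ, tuple):            # /R and \R are invertible
            op, x, y = succ
            ante, succ = (ante + [y], x) if op == '/' else ([x] + ante, y)
        closed.append((ante, succ))
    return prove_irrev_lst(closed)

def prove_irrev_lst(seqs):
    """Don't know phase: each sequent needs some provable focusing."""
    return all(
        ante == [succ] or
        any(prove_irrev(ante, succ, i)
            for i, t in enumerate(ante) if isinstance(t, tuple))
        for ante, succ in seqs)

def prove_irrev(ante, succ, i):
    """Decompose the focused type at position i, keeping the focus on an
    irreversible (here: complex) subtype; otherwise release the focus."""
    op, x, y = ante[i]
    if op == '/':                                  # C/B: Gamma to its right
        for j in range(i + 2, len(ante) + 1):
            rest = ante[:i] + [x] + ante[j:]
            if prove((ante[i + 1:j], y)) and (
                    prove_irrev(rest, succ, i) if isinstance(x, tuple)
                    else prove_irrev_lst([(rest, succ)])):
                return True
    else:                                          # A\C: Gamma to its left
        for j in range(0, i):
            rest = ante[:j] + [y] + ante[i + 1:]
            if prove((ante[j:i], x)) and (
                    prove_irrev(rest, succ, j) if isinstance(y, tuple)
                    else prove_irrev_lst([(rest, succ)])):
                return True
    return False

# Sequent (27): Q/CN, CN, TV, N => S with Q = S/(N\S) and TV = (N\S)/N.
Q = ('/', 'S', ('\\', 'N', 'S'))
TV = ('/', ('\\', 'N', 'S'), 'N')
assert prove(([('/', Q, 'CN'), 'CN', TV, 'N'], 'S'))
```

Unlike the unfocused toy prover, once a type such as Q/CN is put in focus the search commits to decomposing its subtypes, so the gymkhana-style alternation between unrelated irreversible types is ruled out.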
{
"text": "\u2206 \u2212 \u2192 Q \u21d2 A foc \u2206 \u2212 \u2192 Q \u21d2 A \u2206 \u21d2 P foc \u2206 \u21d2 P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "In order to prove completeness of (strong) focalization we invoke also an intermediate weakly focalized system. In all we shall be dealing with three systems: the displacement calculus with additives DA with sequents notated \u2206 \u21d2 A, the weakly focalized displacement calculus with additives DA foc with sequents notated \u2206=\u21d2 w A, and the strongly focalized displacement calculus with additives DA Foc with sequents notated \u2206=\u21d2A. Sequents of both DA foc and DA Foc may contain at most one focalized formula. When a DA foc sequent is notated \u2206=\u21d2 w A3focalized, it means that the sequent possibly contains a (unique) focalized formula. Otherwise, \u2206=\u21d2 w A means that the sequent does not contain a focus. In DA foc Constraint (33) is not imposed. Thus, whereas strong focalization imposes maximal alternating phases of reversible and irreversible rule application, weak focalization does not impose this maximality. In this section we prove the strong focalization property for the displacement calculus with additives DA, that is, strong focalization is complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "The focalization property for Linear Logic was defined by Andreoli (1992) . In this article we follow the proof idea from Laurent (2004) , which we adapt to the intuitionistic non-commutative case DA with twin multiplicative modes of combination, the continuous (concatenation), and the discontinuous (intercalation) products. The proof relies heavily on the Cut-elimination property for weakly focalized DA, which is proved in Appendix A. In our presentation of focalization we have avoided the react rules of Andreoli (1992) and Chaudhuri (2006) , and use instead our simpler box notation suitable for non-commutativity.",
"cite_spans": [
{
"start": 58,
"end": 73,
"text": "Andreoli (1992)",
"ref_id": "BIBREF1"
},
{
"start": 122,
"end": 136,
"text": "Laurent (2004)",
"ref_id": "BIBREF24"
},
{
"start": 511,
"end": 526,
"text": "Andreoli (1992)",
"ref_id": "BIBREF1"
},
{
"start": 531,
"end": 547,
"text": "Chaudhuri (2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "DA Foc is a subsystem of DA foc. DA foc has the focusing rules foc and the Cut rules p-Cut 1, p-Cut 2, n-Cut 1, and n-Cut 2 shown in Equation (34), and the reversible and irreversible rules displayed in the figures, which are read as allowing the occurrence of reversible types in irreversible rules, and as allowing arbitrary sequents, possibly with a focalized type, in reversible rules; when ∃focalized appears in both premise and conclusion, it means that they are either both focused or both unfocused:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "(34) $\\dfrac{\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w A}{\\Delta\\langle\\overrightarrow{Q}\\rangle \\Rightarrow_w A}\\;foc \\qquad \\dfrac{\\Delta \\Rightarrow_w \\boxed{P}}{\\Delta \\Rightarrow_w P}\\;foc$ $\\dfrac{\\Gamma \\Rightarrow_w \\boxed{P} \\quad \\Delta\\langle\\overrightarrow{P}\\rangle \\Rightarrow_w C\\ \\exists focalized}{\\Delta\\langle\\Gamma\\rangle \\Rightarrow_w C\\ \\exists focalized}\\;p\\text{-}Cut_1$ $\\dfrac{\\Gamma \\Rightarrow_w Q\\ \\exists focalized \\quad \\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w C}{\\Delta\\langle\\Gamma\\rangle \\Rightarrow_w C\\ \\exists focalized}\\;p\\text{-}Cut_2$ $\\dfrac{\\Gamma \\Rightarrow_w P\\ \\exists focalized \\quad \\Delta\\langle\\overrightarrow{P}\\rangle \\Rightarrow_w C}{\\Delta\\langle\\Gamma\\rangle \\Rightarrow_w C\\ \\exists focalized}\\;n\\text{-}Cut_1$ $\\dfrac{\\Gamma \\Rightarrow_w Q \\quad \\Delta\\langle\\overrightarrow{Q}\\rangle \\Rightarrow_w C\\ \\exists focalized}{\\Delta\\langle\\Gamma\\rangle \\Rightarrow_w C\\ \\exists focalized}\\;n\\text{-}Cut_2$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "With respect to bias assignment, consider the provable sequent n, n\\s, s\\q ⇒ q. The biases n ∈ At• and q, s ∈ At◦ mean that we have the identity axioms n ⇒ n, q ⇒ q, and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "s \u21d2 s. n \u21d2 n * s , s\\q \u21d2 q \\L n, n\\s , s\\q \u21d2 q foc n, n\\s, s\\q \u21d2 q n \u21d2 n s \u21d2 s \\L n, n\\s \u21d2 s foc n, n\\s \u21d2 s q \u21d2 q \\L n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "n \u21d2 n s \u21d2 s q \u21d2 q foc q \u21d2 q \\L s, s\\q \u21d2 q foc s, s\\q \u21d2 q \\L n, n\\s , s\\q, \u21d2 q foc n, n\\s, s\\q \u21d2 q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "The identity axiom Id we consider for DA and for both DA foc and DA Foc is restricted to atomic types, recalling that atomic types are classified into those of irreversible bias At• and those of reversible bias At◦:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "(35) If p ∈ At•: $p \\Rightarrow_w \\boxed{p}$ and $p \\Rightarrow \\boxed{p}$; if q ∈ At◦: $\\boxed{q} \\Rightarrow_w q$ and $\\boxed{q} \\Rightarrow q$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "In fact, the identity axiom holds of any type A. The generalized case is called the identity rule Id. It has the following formulation in the sequent calculi considered here:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "(36) $\\overrightarrow{A} \\Rightarrow A$ in DA; $\\overrightarrow{P} \\Rightarrow_w \\boxed{P}$ and $\\boxed{\\overrightarrow{Q}} \\Rightarrow_w Q$ in DA foc; $\\overrightarrow{P} \\Rightarrow \\boxed{P}$ and $\\boxed{\\overrightarrow{Q}} \\Rightarrow Q$ in DA Foc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "The identity rule Id, which applies not just to atomic types (as the axiom Id does) but to all types, is easy to prove in both DA and DA foc, but the same is not the case for DA Foc. This is the reason for considering what we have called weak focalization, which helps us to prove smoothly this crucial property for the proof of strong focalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "Theorem 1 (Embedding of DA into DA foc ) For any configuration \u2206 and type A, we have that if \u2206 \u21d2 A then \u2206=\u21d2 w A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "Proof. We proceed by induction on the length of the derivation of DA proofs. In the following lines, we apply the induction hypothesis (i.h.) for each premise of DA rules (with the exception of the identity rule and the right rules of units):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA into DA foc",
"sec_num": "6.1"
},
{
"text": "Identity axiom, where p and q denote, respectively, positive and negative atoms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "$\\dfrac{\\overrightarrow{p} \\Rightarrow_w \\boxed{p}}{\\overrightarrow{p} \\Rightarrow_w p}\\;foc \\qquad \\dfrac{\\boxed{\\overrightarrow{q}} \\Rightarrow_w q}{\\overrightarrow{q} \\Rightarrow_w q}\\;foc$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Cut rule: just apply n-Cut.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "− Units: $\\dfrac{}{\\Lambda \\Rightarrow I}\\;IR$ translates as $\\dfrac{\\Lambda \\Rightarrow_w \\boxed{I}}{\\Lambda \\Rightarrow_w I}\\;foc$ (with the premise by IR), and $\\dfrac{}{1 \\Rightarrow J}\\;JR$ translates as $\\dfrac{1 \\Rightarrow_w \\boxed{J}}{1 \\Rightarrow_w J}\\;foc$ (with the premise by JR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Left unit rules apply as in the case of DA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Left discontinuous product: Directly translates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Right discontinuous product: There are cases P1 ⊙k P2, Q1 ⊙k Q2, Q ⊙k P, and P ⊙k Q. We show one representative example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "\u2206 \u21d2 P \u0393 \u21d2 Q k R \u2206 | k \u0393 \u21d2 P k Q ; \u0393=\u21d2 w Q \u2206=\u21d2 w P Id \u2212 \u2192 P =\u21d2 w P Id \u2212 \u2192 Q =\u21d2 w Q foc \u2212 \u2192 Q =\u21d2 w Q k R \u2212 \u2192 P | k \u2212 \u2192 Q \u21d2 P k Q foc \u2212 \u2192 P | k \u2212 \u2192 Q =\u21d2 w P k Q n-Cut 1 \u2206 | k \u2212 \u2192 Q =\u21d2 w P k Q n-Cut 2 \u2206 | k \u0393=\u21d2 w P k Q \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Left discontinuous ↑k rule (the left rule for ↓k is entirely similar): As in the case of the right discontinuous product ⊙k rule, we only show one representative example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "\u0393 \u21d2 P \u2206 \u2212 \u2192 Q \u21d2 A \u2191 k L \u2206 \u2212 \u2212\u2212 \u2192 Q\u2191 k P | k \u0393 \u21d2 A ; \u0393=\u21d2 w P Id \u2212 \u2192 P =\u21d2 w P Id \u2212 \u2192 Q =\u21d2 w Q \u2191 k L \u2212 \u2212\u2212\u2212 \u2192 Q\u2191 k P | k \u2212 \u2192 P =\u21d2 w Q foc \u2212 \u2212\u2212 \u2192 Q\u2191 k P | k \u2212 \u2192 P =\u21d2 w Q \u2206 \u2212 \u2192 Q =\u21d2 w A n-Cut 2 \u2206 \u2212 \u2212\u2212 \u2192 Q\u2191 k P | k \u2212 \u2192 P =\u21d2 w A n-Cut 1 \u2206 \u2212 \u2212\u2212 \u2192 Q\u2191 k P | k \u0393 =\u21d2 w A \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Right discontinuous \u2191 k rule (the right discontinuous rule for \u2193 k is entirely similar):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "\u2206 | k \u2212 \u2192 A \u21d2 B \u2191 k R \u2206 \u21d2 B\u2191 k A ; \u2206 | k \u2212 \u2192 A =\u21d2 w B \u2191 k R \u2206=\u21d2 w B\u2191 k A \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Product and implicative continuous rules: These follow the same pattern as the discontinuous case: we interchange the metalinguistic k-th intercalation |k with metasyntactic concatenation ',', and we interchange ⊙k, ↑k, and ↓k with \u2022, /, and \\, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Concerning the additives, conjunction right translates directly; we then consider conjunction left (disjunction is symmetric):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "\u2206 \u2212 \u2192 P \u21d2 C &L1 \u2206 \u2212\u2212\u2192 P&Q \u21d2 C ; Id P=\u21d2 w P foc P=\u21d2 w P &L1 \u2212 \u2212\u2212\u2212 \u2192 P&Q =\u21d2 w P foc \u2212\u2212\u2192 P&Q=\u21d2 w P \u2206 \u2212 \u2192 P =\u21d2 w C n-Cut 1 \u2206 \u2212\u2212\u2192 P&Q =\u21d2 w C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "where by Id and application of the foc rule, we have \u2212\u2212\u2192 P&Q=\u21d2 w P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Theorem 2 Let S be a DA Foc sequent. The following holds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA foc into DA Foc",
"sec_num": "6.2"
},
{
"text": "If DA foc ⊢ S then DA Foc ⊢ S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA foc into DA Foc",
"sec_num": "6.2"
},
{
"text": "Proof. We proceed by induction on the size |S| of DA foc-provable sequents S. 10 Since DA foc enjoys Cut-elimination (see Appendix A), we consider Cut-free DA foc proofs of S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding of DA foc into DA Foc",
"sec_num": "6.2"
},
{
"text": "Case |S| = 0. This corresponds to the axiom case, which is the same for both calculi DA foc and DA Foc; see Equation (35).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Suppose |S| > 0. Because |S| > 0, S does not correspond to an axiom (otherwise the size of S would equal 0), so S is derivable with at least one logical rule. The last rule r can be either a logical rule or a foc rule (keep in mind that we are considering Cut-free DA foc proofs). We have two cases: a) If r is logical, then because S is supposed to belong to Seq(DA Foc), its premises (possibly only one premise) also belong to Seq(DA Foc). The premises have size strictly less than S, so we can safely apply the i.h., whence we conclude.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "10 For a given type A, the size of A, |A|, is the number of connectives in A. By recursion on configurations we have:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "|Λ| ::= 0; $|\\overrightarrow{A}, \\Delta|$ ::= |A| + |Δ|, for s(A) = 0; |1, Δ| ::= |Δ|; $|A\\{\\Delta_1 : \\cdots : \\Delta_{s(A)}\\}|$ ::= $|A| + \\sum_{i=1}^{s(A)} |\\Delta_i|$. Moreover, we have: $|\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w A| = |\\Delta\\langle\\overrightarrow{Q}\\rangle \\Rightarrow_w A|$ and $|\\Delta \\Rightarrow_w \\boxed{P}| = |\\Delta \\Rightarrow_w P|$. b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
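The size measure of footnote 10 can be rendered as a short sketch. This is an illustration in Python, not CatLog code; the nested-tuple type representation is an assumption, and the hyperconfiguration clause A{Δ1 : ··· : Δs(A)} is simplified to flat lists of types:

```python
# Sketch of the size measure |.| from footnote 10: |A| counts the
# connectives of type A; a configuration is a flat list of types in
# which the separator 1 contributes nothing, and the empty
# configuration has size 0 (hyperconfigurations are elided here).

def type_size(a):
    """|A|: the number of connectives in type A."""
    if isinstance(a, str):                       # atomic type
        return 0
    return 1 + sum(type_size(x) for x in a[1:])  # head connective + parts

def config_size(config):
    """|Delta|: the sum of the sizes of its types."""
    return sum(type_size(t) for t in config if t != '1')

def sequent_size(antecedent, succedent):
    """|Delta => A|: antecedent size plus succedent size. Boxes of
    focalized type-occurrences are not counted, which is why |S'| = |S|
    when S' is obtained from S by the foc rule."""
    return config_size(antecedent) + type_size(succedent)
```

For instance, with `('\\', 'n', 's')` standing for n\s, the sequent n, n\s ⇒ s has size 1: one connective in the antecedent, none in the succedent.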
{
"text": "Suppose the last rule r is the foc rule, that is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "$\\dfrac{S'}{S}\\;foc$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Observe that |S'| = |S|, because the size function |\u00b7| counts connectives, not boxes of focalized type-occurrences. Because |S'| > 0, it follows that the proof of S' must contain at least one logical rule. S' can have the following judgments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "S' foc ∧ ¬rev \u2003 S' foc ∧ rev",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "We consider the two cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "\u2212 S' foc ∧ ¬rev.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Because S' is focalized and there is no reversible type occurrence, the last rule of S' must correspond either to a multiplicative or to an additive irreversible rule. The premises (possibly only one premise) have size strictly less than S. We can then safely apply the induction hypothesis (i.h.), which gives us DA Foc-provable premises to which we can apply the rule. Whence DA Foc ⊢ S', and hence DA Foc ⊢ S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "\u2212 S' foc ∧ rev.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "The size of S' equals that of the end-sequent S, and moreover the premise is foc ∧ rev, which does not belong to Seq(DA Foc). Clearly, we cannot apply the i.h. What can we do?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "Thanks to the flexibility of DA foc we can overcome this difficulty. S' contains at least one reversible formula. We consider three cases that are illustrative of the situation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "(37) a) $\\Delta\\langle\\overrightarrow{A \\odot_k B}\\rangle \\Rightarrow_w \\boxed{P}$ b) $\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w B\\uparrow_k A$ c) $\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w A\\&B$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "We consider these in turn:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "a) We have by Id that $\\overrightarrow{A \\odot_k B} \\Rightarrow_w \\boxed{\\overrightarrow{A \\odot_k B}}$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": ". We apply to this sequent the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "reversible ⊙k left rule, whence $\\overrightarrow{A}\\,|_k\\,\\overrightarrow{B} \\Rightarrow_w \\boxed{\\overrightarrow{A \\odot_k B}}$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "In case (37a), we have the following proof in DA foc :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "(38) $\\dfrac{\\overrightarrow{A}\\,|_k\\,\\overrightarrow{B} \\Rightarrow_w \\boxed{\\overrightarrow{A \\odot_k B}} \\qquad \\Delta\\langle\\overrightarrow{A \\odot_k B}\\rangle \\Rightarrow_w \\boxed{P}}{\\dfrac{\\Delta\\langle\\overrightarrow{A}\\,|_k\\,\\overrightarrow{B}\\rangle \\Rightarrow_w \\boxed{P}}{\\Delta\\langle\\overrightarrow{A}\\,|_k\\,\\overrightarrow{B}\\rangle \\Rightarrow_w P}\\;foc}\\;p\\text{-}Cut_1$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "To this DA foc proof we apply Cut-elimination and we get the Cut-free",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "DA foc end-sequent Δ⟨−→A |k −→B⟩ =⇒w P. We have |Δ⟨−→A |k −→B⟩ =⇒w P| < |Δ⟨−−→A⊙kB⟩ =⇒w P|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "We can then apply the i.h. and derive the provable",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "DA Foc sequent Δ⟨−→A |k −→B⟩ =⇒ P,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "to which we can apply the left ⊙k rule. We have obtained Δ⟨−−→A⊙kB⟩ =⇒ P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212",
"sec_num": null
},
{
"text": "In the same way, we have that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "$\\boxed{\\overrightarrow{B\\uparrow_k A}}\\,|_k\\,\\overrightarrow{A} \\Rightarrow_w B$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "Thus, in case (37b), we have the following proof in DA foc :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "(39) $\\dfrac{\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w B\\uparrow_k A \\qquad \\boxed{\\overrightarrow{B\\uparrow_k A}}\\,|_k\\,\\overrightarrow{A} \\Rightarrow_w B}{\\dfrac{\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle\\,|_k\\,\\overrightarrow{A} \\Rightarrow_w B}{\\Delta\\langle\\overrightarrow{Q}\\rangle\\,|_k\\,\\overrightarrow{A} \\Rightarrow_w B}\\;foc}\\;p\\text{-}Cut_2$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "As before, we apply Cut-elimination to the above proof. We get the Cut-free",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "DA foc end-sequent Δ⟨−→Q⟩ |k −→A =⇒w B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "It has size less than |Δ⟨−→Q⟩ =⇒w B↑kA|. We can apply the i.h. and obtain the DA Foc-provable sequent Δ⟨−→Q⟩ |k −→A =⇒ B, to which we apply the ↑k right rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b)",
"sec_num": null
},
{
"text": "We have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "c)",
"sec_num": null
},
{
"text": "(40) $\\dfrac{\\Delta\\langle\\boxed{\\overrightarrow{Q}}\\rangle \\Rightarrow_w A\\&B}{\\Delta\\langle\\overrightarrow{Q}\\rangle \\Rightarrow_w A\\&B}\\;foc$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "c)",
"sec_num": null
},
{
"text": "by applying the foc rule and the invertibility of &R we obtain the provable DA foc sequents Δ⟨−→Q⟩ =⇒w A and Δ⟨−→Q⟩ =⇒w B. These sequents have smaller size than Δ⟨−→Q⟩ =⇒w A&B, and they have Cut-free proofs in DA foc. We apply the i.h. and obtain Δ⟨−→Q⟩ =⇒ A and Δ⟨−→Q⟩ =⇒ B. We then apply the & right rule in DA Foc, obtaining Δ⟨−→Q⟩ =⇒ A&B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "c)",
"sec_num": null
},
{
"text": "Let Prov \u03a3 (C) be the class of \u03a3 sequents provable in calculus C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "c)",
"sec_num": null
},
{
"text": "Strong focalization is complete.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 3",
"sec_num": null
},
{
"text": "Proof. Observe that, in particular, Prov Seq(DA) (DA Foc) = Prov Seq(DA) (DA foc). Because, by Theorem 1, Prov Seq(DA) (DA foc) = Prov(DA), we have that Prov Seq(DA) (DA Foc) = Prov(DA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 3",
"sec_num": null
},
{
"text": "CatLog version f1.2, CatLog1 (Morrill 2012) , is a parser/theorem-prover using uniform proof and count-invariance for multiplicatives. CatLog version k2, CatLog3 (Morrill 2017) , is a parser/theorem-prover using focusing, as expounded here, and count-invariance for multiplicatives, additives, brackets, and exponentials (Kuznetsov, Morrill, and Valent\u00edn 2017) . To evaluate the performance of uniform proof and focusing, we created a system version clock3f1.2 made by substituting the theoremproving engine of CatLog1 into the theorem-proving environment of CatLog3 so that count-invariance and other factors were kept constant while the uniform and focusing theorem-proving engines were compared. We performed the Montague test (Morrill and Valent\u00edn 2016) on the two systems, that is, the task of providing a computational grammar of the Montague fragment. In particular, we ran exhaustive parsing of the minicorpus given in Figure 13 . The str(dwp('(7-7)'), [b([john] ), walks], s(f)). str(dwp('(7-16)'), [b([every, man] ), talks], s(f)). str(dwp('(7-19)'), [b([the, fish] ), walks], s(f)). str(dwp('(7-32)'), [b([every, man] ), b ([b([walks, or, talks] )])], s(f)). str(dwp('(7-34)'), [b([b([b([every, man] ), walks, or, b([every, man]), talks])])], s(f)). str(dwp('(7-39)'), [b([b([b([a, woman] ), walks, and, b([she]), talks])])], s(f)). str (dwp('(7-43, 45 )'), [b([john] ), believes, that, b([a, fish]), walks], s(f)). str (dwp('(7-48, 49, 52 )'), [b([every, man] ), believes, that, b([a, fish]), walks], s(f)). str(dwp('(7-57)'), [b([every, fish, such, that, b([it] ), walks]), talks], s(f)). str(dwp('(7-60, 62)'), [b([john] ), seeks, a, unicorn], s(f)). str(dwp('(7-73)'), [b([john] ), is, bill], s(f)). str(dwp('(7-76)'), [b([john] ), is, a, man], s(f)). str(dwp('(7-83)'), [necessarily, b([john] ), walks], s(f)). str(dwp('(7-86)'), [b([john] ), walks, slowly], s(f)). str(dwp('(7-91)'), [b([john] ), tries, to, walk], s(f)). 
str(dwp('(7-98)'), [b([john] ), finds, a, unicorn], s(f)). str(dwp('(7-105)'), [b([every, man, such, that, b([he] ), loves, a, woman]), loses, her], s(f)). str(dwp('(7-110)'), [b([john] ), walks, in, a, park], s(f)). str (dwp('(7-116, 118 )'), [b([every, man] ), doesnt, walk], s(f)).",
"cite_spans": [
{
"start": 29,
"end": 43,
"text": "(Morrill 2012)",
"ref_id": "BIBREF31"
},
{
"start": 162,
"end": 176,
"text": "(Morrill 2017)",
"ref_id": "BIBREF32"
},
{
"start": 321,
"end": 360,
"text": "(Kuznetsov, Morrill, and Valent\u00edn 2017)",
"ref_id": "BIBREF19"
},
{
"start": 730,
"end": 757,
"text": "(Morrill and Valent\u00edn 2016)",
"ref_id": "BIBREF35"
},
{
"start": 961,
"end": 970,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1008,
"end": 1023,
"text": "[b([every, man]",
"ref_id": null
},
{
"start": 1061,
"end": 1075,
"text": "[b([the, fish]",
"ref_id": null
},
{
"start": 1113,
"end": 1128,
"text": "[b([every, man]",
"ref_id": null
},
{
"start": 1134,
"end": 1156,
"text": "([b([walks, or, talks]",
"ref_id": null
},
{
"start": 1189,
"end": 1210,
"text": "[b([b([b([every, man]",
"ref_id": null
},
{
"start": 1280,
"end": 1299,
"text": "[b([b([b([a, woman]",
"ref_id": null
},
{
"start": 1348,
"end": 1363,
"text": "(dwp('(7-43, 45",
"ref_id": null
},
{
"start": 1369,
"end": 1378,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1431,
"end": 1450,
"text": "(dwp('(7-48, 49, 52",
"ref_id": null
},
{
"start": 1456,
"end": 1471,
"text": "[b([every, man]",
"ref_id": null
},
{
"start": 1539,
"end": 1574,
"text": "[b([every, fish, such, that, b([it]",
"ref_id": null
},
{
"start": 1625,
"end": 1634,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1684,
"end": 1693,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1734,
"end": 1743,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1786,
"end": 1808,
"text": "[necessarily, b([john]",
"ref_id": null
},
{
"start": 1846,
"end": 1855,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1901,
"end": 1910,
"text": "[b([john]",
"ref_id": null
},
{
"start": 1958,
"end": 1967,
"text": "[b([john]",
"ref_id": null
},
{
"start": 2018,
"end": 2052,
"text": "[b([every, man, such, that, b([he]",
"ref_id": null
},
{
"start": 2115,
"end": 2124,
"text": "[b([john]",
"ref_id": null
},
{
"start": 2160,
"end": 2177,
"text": "(dwp('(7-116, 118",
"ref_id": null
},
{
"start": 2183,
"end": 2198,
"text": "[b([every, man]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 927,
"end": 936,
"text": "Figure 13",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Exemplification",
"sec_num": "7."
},
{
"text": "Montague test minicorpus for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "Montague lexicon was as in Figure 14 . The tests were executed in XGP Prolog on a MacBook Air. The running times in seconds for exhaustive parsing were as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 36,
"text": "Figure 14",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "(41) clock3f1.2 (uniform): 36 seconds; k2 (focusing): 17 seconds",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "The tests for the minicorpus show that the focusing parsing/proof-search runs in half the time of the uniform parsing/proof-search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "We could have also made comparison with proof net parsing/theorem proving (Moot 2016) , but our proposal includes additives, for which proof nets are still an open question, and not just the displacement calculus multiplicatives, and the point of focalization is that it is a general methodology extendible to, for example, exponentials and other modalities also, for which proof nets are also still under investigation. That is, our focalization approach is scalable in a way that proof nets currently are not, and in this sense comparison with proof nets is not quite appropriate.",
"cite_spans": [
{
"start": 74,
"end": 85,
"text": "(Moot 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "Here we illustrate propagation of focused types in derivations with output of CatLog3. Note that in the derivations produced by CatLog the application of focusing rules is hidden, and derivations can terminate at an unfocused identity axiom (which is derivable by focusing and applying a focused identity axiom). Atomic types are given a reversible bias uniformly. We give an example of quantificational ambiguity involving discontinuous types, and examples of polymorphism of the copula including coordination of unlike types involving additive types. The lexicon is as follows: There is the wide-scope object derivation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "((\u02c7catch A) B) doesnt : \u2200g\u2200a((Sg \u2191 (( Na\\Sf )/( Na\\Sb))) \u2193 Sg) : \u03bbA\u00ac(A \u03bbB\u03bbC(B C)) eat : (( \u2203aNa\\Sb)/\u2203aNa) :\u02c6\u03bbA\u03bbB((\u02c7eat A) B) every : \u2200g(\u2200f ((Sf \u2191 Nt(s(g))) \u2193 Sf )/CNs(g)) : \u03bbA\u03bbB\u2200C[(A C) \u2192 (B C)] finds : (( \u2203gNt(s(g))\\Sf )/\u2203aNa) :\u02c6\u03bbA\u03bbB(Pres ((\u02c7find A) B)) fish : CNs(n) : fish he : [] \u22121 \u2200g(( Sg| Nt(s(m)))/( Nt(s(m))\\Sg)) : \u03bbAA her : \u2200g\u2200a((( Na\\Sg) \u2191 Nt(s(f ))) \u2193 ( ( Na\\Sg)| Nt(s(f )))) : \u03bbAA in : (\u2200a\u2200f (( Na\\Sf )\\( Na\\Sf ))/\u2203aNa) :\u02c6\u03bbA\u03bbB\u03bbC((\u02c7in A) (B C)) is : (( \u2203gNt(s(g))\\Sf )/(\u2203aNa\u2295(\u2203g((CNg/CNg) (CNg\\CNg))\u2212I))) : \u03bbA\u03bbB(Pres (A \u2192 C.[B = C]; D.((D \u03bbE[E = B]) B))) it : \u2200f \u2200a((( Na\\Sf ) \u2191 Nt(s(n))) \u2193 ( ( Na\\Sf )| Nt(s(n)))) : \u03bbAA it : [] \u22121 \u2200f (( Sf | Nt(s(n)))/( Nt(s(n))\\Sf )) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "\u2200f (( ?Sf \\[] \u22121 [] \u22121 Sf )/ Sf ) : (\u03a6 n+ 0 or) or : \u2200a\u2200f (( ?( Na\\Sf )\\[] \u22121 [] \u22121 ( Na\\Sf ))/ ( Na\\Sf )) : (\u03a6 n+ (s 0) or) or : \u2200f (( ?(Sf/( \u2203gNt(s(g))\\Sf ))\\[] \u22121 [] \u22121 (Sf/( \u2203gNt(s(g))\\Sf )))/ (Sf/( \u2203gNt(s(g))\\Sf ))) : (\u03a6 n+ (s 0) or) park : CNs(n) : park seeks : (( \u2203gNt(s(g))\\Sf )/ \u2200a\u2200f (((Na\\Sf )/\u2203bNb)\\(Na\\Sf ))) : \u03bbA\u03bbB((\u02c7tries\u02c6((\u02c7A\u02c7find) B)) B) she : [] \u22121 \u2200g(( Sg| Nt(s(f )))/( Nt(s(f ))\\Sg)) : \u03bbAA slowly : \u2200a\u2200f ( ( Na\\Sf )\\( Na\\Sf )) :\u02c6\u03bbA\u03bbB(\u02c7slowly\u02c6(\u02c7A\u02c7B)) such+that : \u2200n((CNn\\CNn)/(Sf | Nt(n))) : \u03bbA\u03bbB\u03bbC[(B C) \u2227 (A C)] talks : ( \u2203gNt(s(g))\\Sf ) :\u02c6\u03bbA(Pres (\u02c7talk A)) that : (CPthat/ Sf ) :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "Nt(s(A)) \u21d2 Nt(s(A)) Nt(s(A)) \u21d2 Nt(s(A)) Sf \u21d2 Sf \\L Nt(s(A)), Nt(s(A))\\Sf \u21d2 Sf /L Nt(s(A)), (Nt(s(A))\\Sf )/Nt(s(B)) , Nt(s(B)) \u21d2 Sf \u2191R 1, (Nt(s(A))\\Sf )/Nt(s(B)), Nt(s(B)) \u21d2 Sf \u2191 Nt(s(A)) Sf \u21d2 Sf \u2193L (Sf \u2191 Nt(s(A))) \u2193 Sf , (Nt(s(A))\\Sf )/Nt(s(B)), Nt(s(B)) \u21d2 Sf \u2191R (Sf \u2191 Nt(s(A))) \u2193 Sf, (Nt(s(A))\\Sf )/Nt(s(B)), 1 \u21d2 Sf \u2191 Nt(s(B)) Sf \u21d2 Sf \u2193L (Sf \u2191 Nt(s(A))) \u2193 Sf, (Nt(s(A))\\Sf )/Nt(s(B)), (Sf \u2191 Nt(s(B))) \u2193 Sf \u21d2 Sf \u2203B[(person B) \u2227 \u2200E[(person E) \u2192 ((love B) E)]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "And the wide-scope subject derivation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "Nt(s(A)) \u21d2 Nt(s(A)) Nt(s(A)) \u21d2 Nt(s(A)) Sf \u21d2 Sf \\L Nt(s(A)), Nt(s(A))\\Sf \u21d2 Sf /L Nt(s(A)), (Nt(s(A))\\Sf )/Nt(s(B)) , Nt(s(B)) \u21d2 Sf \u2191R Nt(s(A)), (Nt(s(A))\\Sf )/Nt(s(B)), 1 \u21d2 Sf \u2191 Nt(s(B)) Sf \u21d2 Sf \u2193L Nt(s(A)), (Nt(s(A))\\Sf )/Nt(s(B)), (Sf \u2191 Nt(s(B))) \u2193 Sf \u21d2 Sf \u2191R 1, (Nt(s(A))\\Sf )/Nt(s(B)), (Sf \u2191 Nt(s(B))) \u2193 Sf \u21d2 Sf \u2191 Nt(s(A)) Sf \u21d2 Sf \u2193L (Sf \u2191 Nt(s(A))) \u2193 Sf , (Nt(s(A))\\Sf )/Nt(s(B)), (Sf \u2191 Nt(s(B))) \u2193 Sf \u21d2 Sf \u2200B[(person B) \u2192 \u2203E[(person E) \u2227 ((love E) B)]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "The weak polymorphism of the copula with respect to nominal and adjectival complements is illustrated as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "(foc(2)) Tully+is+Cicero : Sf Nt(s(m)) : t, (Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)) : \u03bbD\u03bbE(D \u2192 F.[E = F]; G.((G \u03bbH[H = E]) E)), Nt(s(m)) : c \u21d2 Sf Nt(s(m)) \u21d2 Nt(s(m)) \u2295R Nt(s(m)) \u21d2 Nt(s(m))\u2295(CNA/CNA) Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf \\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) , Nt(s(m)) \u21d2 Sf [t = c] (foc(3)) Tully+is+humanist : Sf Nt(s(m)) : t, (Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)) : \u03bbD\u03bbE(D \u2192 F.[E = F]; G.((G \u03bbH[H = E]) E)), CNI/CNI : \u03bbJ\u03bbK[(J K) \u2227 (humanist K)] \u21d2 Sf CNA \u21d2 CNA CNA \u21d2 CNA /L CNA/CNA , CNA \u21d2 CNA /R CNA/CNA \u21d2 CNA/CNA \u2295R CNA/CNA \u21d2 NB\u2295(CNA/CNA) Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf \\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(NA\u2295(CNB/CNB)) , CNB/CNB \u21d2 Sf (humanist t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "Finally, the presence of such polymorphism in the coordination of unlike types (Bayer 1996; Morrill 1990 ) is illustrated by the following; the derivation is fragmented in two parts in Figure 15 .",
"cite_spans": [
{
"start": 79,
"end": 91,
"text": "(Bayer 1996;",
"ref_id": "BIBREF3"
},
{
"start": 92,
"end": 104,
"text": "Morrill 1990",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 185,
"end": 194,
"text": "Figure 15",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "(foc(4)) Tully+is+Cicero+and+humanist : Sf Nt(s(m)) : t, (Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)) : \u03bbD\u03bbE(D \u2192 F.[E = F]; G.((G \u03bbH[H = E]) E)), Nt(s(m)) : c, ((((Nt(s(I))\\Sf )/(NJ\u2295(CNK/CNK)))\\ (Nt(s(I))\\Sf ))\\(((Nt(s(I))\\Sf )/(NJ\u2295(CNK/CNK)))\\(Nt(s(I))\\Sf )))/ (((Nt(s(I))\\Sf )/(NJ\u2295(CNK/CNK)))\\(Nt(s(I))\\Sf )) : \u03bbL\u03bbM\u03bbN\u03bbO[((M N) O)\u2227 ((L N) O)], CNP/CNP : \u03bbQ\u03bbR[(Q R) \u2227 (humanist R)] \u21d2 Sf [[t = c] \u2227 (humanist t)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "We have claimed that just as the parse structures of context free grammar are ordered trees, the parse structures of categorial grammar are proof nets. Thus, just as we think of context free algorithms as finding ordered trees, bottom-up, top-down, and so forth, we can think of categorial algorithms as finding proof nets. The complication is that proof nets are much more sublime objects than ordered trees: They embody not only syntactic coherence but also semantic coherence. Focalization is a step on the way to eliminating spurious ambiguity by building such proof nets systematically. A further step on the way, eliminating all spurious ambiguity, would be multifocusing. This remains a topic for future study. Another topic for further study in focusing is the question of which assignments of bias are favorable for the processing of given lexica/corpora. Alternatively, for context free grammar one can perform chart parsing, or tabularization, and at least for the basic case of Lambek calculus suitable notions of proof net also support tabularization (Morrill 1996; de Groote 1999; Pentus 2010; Kanovich et al. 2017) . This also remains a topic for future study.",
"cite_spans": [
{
"start": 1064,
"end": 1078,
"text": "(Morrill 1996;",
"ref_id": "BIBREF29"
},
{
"start": 1079,
"end": 1094,
"text": "de Groote 1999;",
"ref_id": null
},
{
"start": 1095,
"end": 1107,
"text": "Pentus 2010;",
"ref_id": "BIBREF41"
},
{
"start": 1108,
"end": 1129,
"text": "Kanovich et al. 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "For the time being, however, we hope to have motivated the relevance of focalization to categorial parsing as deduction in relation to the DA categorial logic fragment, which leads naturally to the program of focalization of extensions of this fragment with connectives such as exponentials. (Nt(s(m) )\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf ))\\(((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\ (Nt(s(m) We prove this by induction on the complexity (d, h) of topmost instances of Cut, where d is the size 11 of the cut formula and h is the length of the Cut-free derivation above the Cut rule. There are four cases to consider: Cut with axiom in the minor premise, Cut with axiom in the major premise, principal Cuts, and permutation conversions. In each case, the complexity of the Cut is reduced. In order to save space, we will not be exhaustively showing all the cases because many follow the same pattern. In particular, for any irreversible logical rule there are always four cases to consider corresponding to the polarity of the subformulas. In the following, we will show only one representative example. Concerning continuous and discontinuous formulas, we will show only the discontinuous cases (discontinuous connectives are less well-known than the continuous ones of the plain Lambek calculus). For the continuous instances, the reader has only to interchange the meta-linguistic wrap | k with the meta-syntactic concatenation , , k with \u2022, \u2191 k with / and \u2193 k with \\. The units cases (principal case and permutation conversion cases) are completely trivial.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "(Nt(s(m)",
"ref_id": null
},
{
"start": 386,
"end": 394,
"text": "(Nt(s(m)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "Proof. Identity cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "P =\u21d2 w P \u2206 \u2212 \u2192 P =\u21d2 w B3focalized p-Cut 1 \u2206 \u2212 \u2192 P =\u21d2 w B3focalized ; \u2206 \u2212 \u2192 P =\u21d2 w B3focalized \u2206=\u21d2 w Q3focalized \u2212 \u2192 Q =\u21d2 w Q p-Cut 2 \u2206 \u2212 \u2192 Q =\u21d2 w B3focalized ; \u2206 \u2212 \u2192 Q =\u21d2 w B3focalized",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "Principal cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "\u2022 foc cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "\u2206=\u21d2 w P foc \u2206=\u21d2 w P \u0393 \u2212 \u2192 P =\u21d2 w A3focalized n-Cut 1 \u0393 \u2206 =\u21d2 w A3focalized ; \u2206=\u21d2 w P \u0393 \u2212 \u2192 P =\u21d2 w A3focalized p-Cut 1 \u0393 \u2206 =\u21d2 w A3focalized \u2206=\u21d2 w Q \u2206 \u2212 \u2192 Q =\u21d2 w A foc \u0393 \u2212 \u2192 Q =\u21d2 w A n-Cut 2 \u0393 \u2206 =\u21d2 w A ; \u2206=\u21d2 w Q \u0393 \u2212 \u2192 Q =\u21d2 w A p-Cut 2 \u0393 \u2206 =\u21d2 w A \u2022 logical connectives: \u2206| k \u2212 \u2192 P 1 =\u21d2 w P 2 3focalized \u2191 k R \u2206=\u21d2 w P 2 \u2191 k P 1 3focalized \u0393 1 =\u21d2 w P 1 \u0393 2 \u2212 \u2192 P 2 =\u21d2 w A \u2191 k L \u0393 2 \u2212 \u2212\u2212\u2212\u2212 \u2192 P 2 \u2191 k P 1 | k \u0393 1 =\u21d2 w A p-Cut 2 \u0393 2 \u2206| k \u0393 1 =\u21d2 w A3focalized ; \u0393 1 =\u21d2 w P 1 \u2206| k \u2212 \u2192 P 1 =\u21d2 w P 2 3focalized \u0393 2 \u2212 \u2192 P 2 =\u21d2 w A n-Cut 1 \u0393 2 \u2206| k \u2212 \u2192 P 1 =\u21d2 w A3focalized p-Cut 1 \u0393 2 \u2206| k \u0393 1 =\u21d2 w A3focalized",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "The case of \u2193 k is entirely similar to the \u2191 k case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "\u2206 1 =\u21d2 w P \u2206 2 =\u21d2 w Q k R \u2206 1 | k \u2206 2 =\u21d2 w P k Q \u0393 \u2212 \u2192 P | k \u2212 \u2192 Q =\u21d2 w A3focalized k L \u0393 \u2212 \u2212\u2212 \u2192 P k Q =\u21d2 w A3focalized p-Cut 1 \u0393 \u2206 1 | k \u2206 2 =\u21d2 w A3focalized ; \u2206 2 =\u21d2 w Q \u2206 1 =\u21d2 w P \u0393 \u2212 \u2192 P | k \u2212 \u2192 Q =\u21d2 w A3focalized p-Cut 1 \u0393 \u2206 1 | k \u2212 \u2192 Q =\u21d2 w A3focalized n-Cut 2 \u0393 \u2206 1 | k \u2206 2 =\u21d2 w A3focalized \u2206=\u21d2 w Q3focalized \u2206=\u21d2 w P3focalized &R \u2206=\u21d2 w Q&P3focalized \u0393 \u2212 \u2192 Q =\u21d2 w B &L \u0393 \u2212 \u2212\u2212\u2212 \u2192 Q&P =\u21d2 w B p-Cut 2 \u0393 \u2206 =\u21d2 w B3focalized ; \u2206=\u21d2 w Q3focalized \u0393 \u2212 \u2192 Q =\u21d2 w B p-Cut 2 \u0393 \u2206 =\u21d2 w B3focalized \u2206=\u21d2 w P3focalized \u2206=\u21d2 w Q3focalized &R \u2206=\u21d2 w P&Q3focalized \u0393 \u2212 \u2192 P =\u21d2 w B &L \u0393 \u2212 \u2212\u2212\u2212 \u2192 P&Q =\u21d2 w B p-Cut 2 \u0393 \u2206 =\u21d2 w B3focalized ; \u2206=\u21d2 w P3focalized \u0393 \u2212 \u2192 P =\u21d2 w B n-Cut 1 \u0393 \u2206 =\u21d2 w B3focalized",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "Left commutative p-Cut conversions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "\u2206 \u2212 \u2192 Q =\u21d2 w Q 2 foc \u2206 \u2212 \u2192 Q =\u21d2 w Q 2 \u0393 \u2212 \u2212 \u2192 Q 2 =\u21d2 w C p-Cut 2 \u0393 \u2206 \u2212 \u2192 Q =\u21d2 w C ; \u2206 \u2212 \u2192 Q =\u21d2 w Q 2 \u0393 \u2212 \u2212 \u2192 Q 2 =\u21d2 w C p-Cut 2 \u0393 \u2206 \u2212 \u2192 Q =\u21d2 w C foc \u0393 \u2206 \u2212 \u2192 Q =\u21d2 w C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2212 \u2192",
"sec_num": null
},
{
"text": "The original Lambek calculus did not include the product unit and had a non-empty antecedent condition (\"Lambek's restriction\"). The displacement calculus used in the present article conservatively extends the Lambek calculus without Lambek's restriction and with product units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Indeed,Sorokin (2013) andWijnholds (2014) show that Displacement calculus recognizes the class of well-nested multiple context free languages(Kanazawa 2009); but Combinatory Categorial Grammar recognizes only the class of tree adjoining languages(Joshi, Vijay-Shanker, and Weir 1991).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is known that displacement calculus (without additives) generates a well-known class of mildly context free languages: the well-nested multiple context free languages(Sorokin 2013;Wijnholds 2014). At the time of writing, only this and other lower bounds are known; tight upper bounds on the weak generative capacity of displacement calculus constitute an open question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is because in Cut-free backward-chaining proof search for a given goal sequent a finite number of rules can be applied backwards in only a finite number of ways to generate subgoals at each step, and these subgoals have lower complexity (fewer connectives) than the goal matched; hence the proof search space is finite. 6 Other terms found in the literature are invertible, asynchronous, or negative. 7 Other terms found in the literature are non-invertible, synchronous, or positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Unknown even to the inventor of Linear Logic, J.-Y. Girard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If it is convenient, we may drop the subscripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The size of |A| is the number of connectives appearing in A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank anonymous Computational Linguistics referees for comments and suggestions that have improved this article. This research was supported by grant TIN2017-89244-R from MINECO (Ministerio de Economia, Industria y Competitividad).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Right commutative p-Cut conversions (unordered multiple distinguished occurrences are separated by semicolons):Left commutative n-Cut conversions:Right commutative n-Cut conversions:This completes the proof.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A. Cut Elimination in DA foc",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Cyclic multiplicative-additive proof nets of linear logic with an application to language parsing",
"authors": [
{
"first": "Vito",
"middle": [
"Michele"
],
"last": "Abrusci",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Maieli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Formal Grammar: 20th and 21st International Conferences, FG 2015",
"volume": "",
"issue": "",
"pages": "43--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abrusci, Vito Michele and Roberto Maieli. 2016. Cyclic multiplicative-additive proof nets of linear logic with an application to language parsing. In Proceedings of Formal Grammar: 20th and 21st International Conferences, FG 2015, pages 43-59, Bozen.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Logic programming with focusing in linear logic",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Andreoli",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Logic and Computation",
"volume": "2",
"issue": "3",
"pages": "297--347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreoli, J. M. 1992. Logic programming with focusing in linear logic. Journal of Logic and Computation, 2(3):297-347.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On categorial and phrase structure grammars",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bar-Hillel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gaifman",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shamir",
"suffix": ""
}
],
"year": 1960,
"venue": "Bulletin of the Research Council of Israel",
"volume": "9",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bar-Hillel, Y., C. Gaifman, and E. Shamir. 1960. On categorial and phrase structure grammars. Bulletin of the Research Council of Israel, 9F:1-16.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The coordination of unlike categories",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bayer",
"suffix": ""
}
],
"year": 1996,
"venue": "Language",
"volume": "72",
"issue": "3",
"pages": "579--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bayer, Samuel. 1996. The coordination of unlike categories. Language, 72(3):579-616.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Focused Inverse Method for Linear Logic",
"authors": [
{
"first": "Kaustuv",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaudhuri, Kaustuv. 2006. The Focused Inverse Method for Linear Logic. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA, USA. AAI3248489.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Canonical sequent proofs via multi-focusing",
"authors": [
{
"first": "Kaustuv",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Saurin",
"suffix": ""
}
],
"year": 2008,
"venue": "Fifth IFIP International Conference on Theoretical Computer Science -TCS",
"volume": "",
"issue": "",
"pages": "383--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaudhuri, Kaustuv, Dale Miller, and Alexis Saurin. 2008. Canonical sequent proofs via multi-focusing. In Fifth IFIP International Conference on Theoretical Computer Science -TCS 2008, IFIP 20th World Computer Congress, pages 383-396, Milano.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Elimination of spurious ambiguity in transition-based dependency parsing",
"authors": [
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, Shay B., Carlos G\u00f3mez-Rodr\u00edguez, and Giorgio Satta. 2012. Elimination of spurious ambiguity in transition-based dependency parsing. CoRR, abs/1206.6735.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient normal-form parsing for combinatory categorial grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eisner, Jason. 1996. Efficient normal-form parsing for combinatory categorial grammar. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 79-86, Stroudsburg, PA. Fadda, Mario. 2010. Geometry of Grammar: Exercises in Lambek Style. Ph.D. thesis, Universitat Polit\u00e8cnica de Catalunya, Barcelona.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A dynamic programming approach to categorial deduction",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "De Groote",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 16th International Conference on Automated Deduction (CADE)",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "de Groote, Philippe. 1999. A dynamic programming approach to categorial deduction. In Proceedings of the 16th International Conference on Automated Deduction (CADE), pages 1-15, Trento.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient stacked dependency parsing by forest reranking",
"authors": [
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Shuhei",
"middle": [],
"last": "Kondo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2013,
"venue": "TACL",
"volume": "1",
"issue": "",
"pages": "139--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hayashi, Katsuhiko, Shuhei Kondo, and Yuji Matsumoto. 2013. Efficient stacked dependency parsing by forest reranking. TACL, 1:139-150.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Studied Flexibility. Categories and Types in Syntax and Semantics",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hendriks",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendriks, H. 1993. Studied Flexibility. Categories and Types in Syntax and Semantics. Ph.D. thesis, Universiteit van Amsterdam, ILLC, Amsterdam.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Normal form theorem proving for the Lambek calculus",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hepple",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "173--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hepple, Mark. 1990. Normal form theorem proving for the Lambek calculus. In Proceedings of COLING, pages 173-178, Stockholm.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parsing and derivational equivalence",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hepple",
"suffix": ""
},
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hepple, Mark and Glyn Morrill. 1989. Parsing and derivational equivalence. In Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, pages 10-18, Stroudsburg, PA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dealing with spurious ambiguity in learning ITG-based word alignment",
"authors": [
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers",
"volume": "2",
"issue": "",
"pages": "379--383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Shujian, Stephan Vogel, and Jiajun Chen. 2011. Dealing with spurious ambiguity in learning ITG-based word alignment. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers -Volume 2, pages 379-383, Stroudsburg, PA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Proof nets for unit-free multiplicative-additive linear logic",
"authors": [
{
"first": "Dominic",
"middle": [
"J D"
],
"last": "Hughes",
"suffix": ""
},
{
"first": "Rob",
"middle": [
"J"
],
"last": "Van Glabbeck",
"suffix": ""
}
],
"year": 2005,
"venue": "ACM Transactions on Computational Logic (TOCL)",
"volume": "6",
"issue": "4",
"pages": "784--842",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hughes, Dominic J. D. and Rob J. van Glabbeck. 2005. Proof nets for unit-free multiplicative-additive linear logic. ACM Transactions on Computational Logic (TOCL), 6(4):784-842.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The convergence of mildly context-sensitive grammar formalisms",
"authors": [
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 1991,
"venue": "Foundational Issues in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "31--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, A. K., K. Vijay-Shanker, and D. Weir. 1991. The convergence of mildly context-sensitive grammar formalisms. In P. Sells, S. M. Shieber, and T. Wasow, editors, Foundational Issues in Natural Language Processing, The MIT Press, Cambridge, MA, pages 31-81.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Lambek calculus enriched with additional connectives",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kanazawa",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Logic, Language and Information",
"volume": "1",
"issue": "",
"pages": "141--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanazawa, M. 1992. The Lambek calculus enriched with additional connectives. Journal of Logic, Language and Information, 1:141-171.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The pumping lemma for well-nested multiple context-free languages",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Kanazawa",
"suffix": ""
}
],
"year": 2009,
"venue": "Developments in Language Theory: 13th International Conference",
"volume": "",
"issue": "",
"pages": "312--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanazawa, Makoto. 2009. The pumping lemma for well-nested multiple context-free languages. In Developments in Language Theory: 13th International Conference, DLT 2009, pages 312-325, Stuttgart.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A polynomial-time algorithm for the Lambek calculus with brackets of bounded order",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Kanovich",
"suffix": ""
},
{
"first": "Stepan",
"middle": [],
"last": "Kuznetsov",
"suffix": ""
},
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Andreu",
"middle": [],
"last": "Scedrov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd International Conference on Formal Structures for Computation and Deduction",
"volume": "17",
"issue": "",
"pages": "22--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanovich, Max, Stepan Kuznetsov, Glyn Morrill, and Andreu Scedrov. 2017. A polynomial-time algorithm for the Lambek calculus with brackets of bounded order. In Proceedings of the 2nd International Conference on Formal Structures for Computation and Deduction (FSCD 2017), pages 22.1-22.17, Leibniz.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Count-invariance including exponentials",
"authors": [
{
"first": "Stepan",
"middle": [],
"last": "Kuznetsov",
"suffix": ""
},
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
}
],
"year": 2017,
"venue": "15th Meeting on Mathematics of Language",
"volume": "",
"issue": "",
"pages": "128--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuznetsov, Stepan, Glyn Morrill, and Oriol Valent\u00edn. 2017. Count-invariance including exponentials. In 15th Meeting on Mathematics of Language, pages 128-139, London.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Proof nets for the Lambek calculus -an overview",
"authors": [
{
"first": "F",
"middle": [],
"last": "Lamarche",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Retor\u00e9",
"suffix": ""
}
],
"year": 1958,
"venue": "Proofs and Linguistic Categories -Applications of Logic to the Analysis and Implementation of Natural Language",
"volume": "65",
"issue": "",
"pages": "154--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lamarche, F. and C. Retor\u00e9. 1996. Proof nets for the Lambek calculus -an overview. In Proofs and Linguistic Categories -Applications of Logic to the Analysis and Implementation of Natural Language, pages 241-626, Bologna. Lambek, Joachim. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154-170.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On the calculus of syntactic types",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Lambek",
"suffix": ""
}
],
"year": 1961,
"venue": "Structure of Language and its Mathematical Aspects, Proceedings of the Symposia in Applied Mathematics XII",
"volume": "",
"issue": "",
"pages": "166--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lambek, Joachim. 1961. On the calculus of syntactic types. In Roman Jakobson, editor, Structure of Language and its Mathematical Aspects, Proceedings of the Symposia in Applied Mathematics XII, pages 166-178, Providence, RI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Categorial and categorical grammars",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Lambek",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lambek, Joachim. 1988. Categorial and categorical grammars. In Richard T.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Categorial Grammars and Natural Language Structures",
"authors": [
{
"first": "Emmon",
"middle": [],
"last": "Oehrle",
"suffix": ""
},
{
"first": "Deidre",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wheeler",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "32",
"issue": "",
"pages": "297--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oehrle, Emmon Bach, and Deidre Wheeler, editors, Categorial Grammars and Natural Language Structures, volume 32 of Studies in Linguistics and Philosophy, D. Reidel, Dordrecht, pages 297-317.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A proof of the focalization property of linear logic. Unpublished manuscript",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Laurent",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurent, Olivier. 2004. A proof of the focalization property of linear logic. Unpublished manuscript, CNRS - Universit\u00e9 Paris VII.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Uniform proofs as a foundation for logic programming",
"authors": [
{
"first": "D",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Nadathur",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pfenning",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Scedrov",
"suffix": ""
}
],
"year": 1991,
"venue": "Annals of Pure and Applied Logic",
"volume": "51",
"issue": "",
"pages": "125--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, D., G. Nadathur, F. Pfenning, and A. Scedrov. 1991. Uniform proofs as a foundation for logic programming. Annals of Pure and Applied Logic, 51:125-157.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Proof nets for the Lambek-Grishin calculus. CoRR, abs/1112.6384. Moot, Richard",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Moortgat",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Moot",
"suffix": ""
}
],
"year": 2011,
"venue": "Categories and Types in Logic, Language and Physics: Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday",
"volume": "8222",
"issue": "",
"pages": "297--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moortgat, Michael and Richard Moot. 2011. Proof nets for the Lambek-Grishin calculus. CoRR, abs/1112.6384. Moot, Richard. 2014. Extended Lambek calculi and first-order linear logic. In Casadio, Claudia, Bob Coeke, Michael Moortgat, and Philip Scott, editors, Categories and Types in Logic, Language and Physics: Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday, volume 8222 of LNCS, FoLLI Publications in Logic, Language and Information, Springer, Berlin, pages 297-330.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "and Christian Retor\u00e9. 2012. The Logic of Categorial Grammars: A Deductive Account of Natural Language Syntax and Semantics",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Moot",
"suffix": ""
}
],
"year": 2016,
"venue": "Formal Grammar: 20th and 21st International Conferences, FG 2015",
"volume": "",
"issue": "",
"pages": "273--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moot, Richard. 2016. Proof nets for the displacement calculus. In Annie Foret, Glyn Morrill, Reinhard Muskens, Rainer Osswald, and Sylvain Pogodalla, editors, Formal Grammar: 20th and 21st International Conferences, FG 2015, pages 273-289, Bozen. Moot, Richard and Christian Retor\u00e9. 2012. The Logic of Categorial Grammars: A Deductive Account of Natural Language Syntax and Semantics. Springer, Heidelberg.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Grammar and Logical Types",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Seventh Colloquium",
"volume": "",
"issue": "",
"pages": "429--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn. 1990. Grammar and Logical Types. In Martin, Stockhof, and Leen Torenvliet, editors, Proceedings of the Seventh Colloquium, pages 429-450, Amsterdam.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Memoisation of categorial proof nets: Parallelism in categorial processing",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
}
],
"year": 1996,
"venue": "Proofs and Linguistic Categories, Proceedings 1996 Roma Workshop",
"volume": "",
"issue": "",
"pages": "157--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn. 1996. Memoisation of categorial proof nets: Parallelism in categorial processing. In Proofs and Linguistic Categories, Proceedings 1996 Roma Workshop, pages 157-169, Bologna.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "CatLog: A categorial parser/theorem-prover",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
}
],
"year": 2012,
"venue": "LACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn. 2012. CatLog: A categorial parser/theorem-prover. In LACL 2012 System Demonstrations, pages 13-16, Nantes.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Parsing logical grammar: Catlog3",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Workshop on Logic and Algorithms in Computational Linguistics 2017",
"volume": "",
"issue": "",
"pages": "107--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn. 2017. Parsing logical grammar: Catlog3. In Proceedings of the Workshop on Logic and Algorithms in Computational Linguistics 2017, pages 107-131, Stockholm University.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Displacement calculus",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
}
],
"year": 2010,
"venue": "Special issue Festschrift for Joachim Lambek",
"volume": "36",
"issue": "",
"pages": "167--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn and Oriol Valent\u00edn. 2010. Displacement calculus. Linguistic Analysis, 36(1-4):167-192. Special issue Festschrift for Joachim Lambek.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Multiplicative-additive focusing for parsing as deduction",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
}
],
"year": 2015,
"venue": "First International Workshop on Focusing, Workshop Affiliated with LPAR 2015",
"volume": "",
"issue": "",
"pages": "29--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn and Oriol Valent\u00edn. 2015. Multiplicative-additive focusing for parsing as deduction. In First International Workshop on Focusing, Workshop Affiliated with LPAR 2015, pages 29-54, Suva.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Computational coverage of type logical grammar: The Montague test",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
}
],
"year": 2016,
"venue": "Empirical Issues in Syntax and Semantics",
"volume": "11",
"issue": "",
"pages": "141--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn and Oriol Valent\u00edn. 2016. Computational coverage of type logical grammar: The Montague test. In C. Pi\u00f1\u00f3n, editor, Empirical Issues in Syntax and Semantics, volume 11. Colloque de Syntaxe et S\u00e9mantique\u0139 Paris (CSSP), Paris, pages 141-170.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The displacement calculus",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Fadda",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Logic, Language and Information",
"volume": "20",
"issue": "1",
"pages": "1--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn, Oriol Valent\u00edn, and Mario Fadda. 2011. The displacement calculus. Journal of Logic, Language and Information, 20(1):1-48.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Categorial Grammar: Logical Syntax, Semantics, and Processing",
"authors": [
{
"first": "Glyn",
"middle": [
"V"
],
"last": "Morrill",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn V. 2011. Categorial Grammar: Logical Syntax, Semantics, and Processing. Oxford University Press.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Lambek grammars are context-free",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pentus",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings Eighth Annual IEEE Symposium on Logic in Computer Science. Montreal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentus, M. 1992. Lambek grammars are context-free. Technical report, Department of Mathematics and Logic, Steklov Math. Institute, Moskow Also published as ILLC Report, University of Amsterdam, 1993, and in Proceedings Eighth Annual IEEE Symposium on Logic in Computer Science. Montreal, 1993.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Free monoid completeness of the Lambek calculus allowing empty premises",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pentus",
"suffix": ""
}
],
"year": 1998,
"venue": "Logic Colloquium '96",
"volume": "12",
"issue": "",
"pages": "171--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentus, M. 1998. Free monoid completeness of the Lambek calculus allowing empty premises. In J. M. Larrazabal, D. Lascar, and Mints G., editors, Logic Colloquium '96, volume 12. Springer, pages 171-209.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Shortened version published as Language completeness of the Lambek calculus",
"authors": [
{
"first": "Mati",
"middle": [],
"last": "Pentus",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Ninth Annual IEEE Symposium on Logic in Computer Science",
"volume": "",
"issue": "",
"pages": "487--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentus, Mati. 1993. Lambek calculus is L-complete, ILLC Report, University of Amsterdam. Shortened version published as Language completeness of the Lambek calculus, Proceedings of the Ninth Annual IEEE Symposium on Logic in Computer Science, pages 487-496, Paris.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A polynomial-time algorithm for Lambek grammars of bounded order",
"authors": [
{
"first": "Mati",
"middle": [],
"last": "Pentus",
"suffix": ""
}
],
"year": 2010,
"venue": "Special issue Festschrift for Joachim Lambek",
"volume": "36",
"issue": "",
"pages": "441--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pentus, Mati. 2010. A polynomial-time algorithm for Lambek grammars of bounded order. Linguistic Analysis, 36(1-4):441-472. Special issue Festschrift for Joachim Lambek.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Substructural Logical Specifications",
"authors": [
{
"first": "Robert",
"middle": [
"J"
],
"last": "Simmons",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simmons, Robert J. 2012. Substructural Logical Specifications. Ph.D. thesis, Carnegie Mellon University, Pittsburgh.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Normal forms for multiple context-free languages and displacement Lambek grammars",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Sorokin",
"suffix": ""
}
],
"year": 2013,
"venue": "Logical Foundations of Computer Science: International Symposium",
"volume": "",
"issue": "",
"pages": "319--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sorokin, Alexey. 2013. Normal forms for multiple context-free languages and displacement Lambek grammars. In Sergei Artemov and Anil Nerode, editors, Logical Foundations of Computer Science: International Symposium, pages 319-334, San Diego, CA.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 2000. The Syntactic Process, MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Models for the displacement calculus",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
}
],
"year": 2016,
"venue": "Formal Grammar: 20th and 21st International Conferences",
"volume": "",
"issue": "",
"pages": "147--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valent\u00edn, Oriol. 2016. Models for the displacement calculus. In Formal Grammar: 20th and 21st International Conferences, pages 147-163, Bozen.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Conversions between D and MCFG: Logical characterizations of the mildly context-sensitive languages",
"authors": [
{
"first": "Gijs",
"middle": [],
"last": "Wijnholds",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jasper",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics in the Netherlands Journal",
"volume": "4",
"issue": "",
"pages": "137--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wijnholds, Gijs Jasper. 2014. Conversions between D and MCFG: Logical characterizations of the mildly context-sensitive languages. Computational Linguistics in the Netherlands Journal, 4:137-148.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Moortgat and Moot (2011) study versions of proof nets and focusing for so-called Lambek-Grishin calculus. Lambek-Grishin calculus is like Lambek calculus but includes a disjunctive multiplicative connective family as well as the conjunctive multiplicative connective family of the Lambek calculus L, and it is thus multiple-conclusioned. For the case",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "The solution: proof nets. But proof nets for categorial logic in general are not fully understood, for example, in the case of units and additive connectives.",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Left irreversible continuous multiplicative rules.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Left irreversible discontinuous multiplicative rules.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Right irreversible continuous multiplicative rules.",
"uris": null,
"num": null
},
"FIGREF5": {
"type_str": "figure",
"text": "Foc rule for DA Foc . application. Pseudo-code for a recursive algorithm of strongly focalized proof search is as follows: bool. function prove(\u03a3: unfocused sequent); /* prove(\u03a3) returns true if the unfocused sequent \u03a3 is provable; otherwise it returns false. */ return prove rev lst([\u03a3]). bool. function prove rev lst(Ls: list of unfocused sequents); /* prove rev lst(Ls) returns true if the list of unfocused sequents Ls are provable; otherwise it returns false. */ while Ls contains a sequent \u03a3 with a reversible type do Ls := Ls with \u03a3 replaced by the premises of the rule for the reversible type; return prove rev lst(Ls). bool. function prove irrev lst(Ls: list of unfocused sequents); /* prove irrev lst(Ls) returns true if the list of unfocused sequents without reversible types Ls are provable; otherwise it returns false. */ var Success: bool; if Ls = [] then return true else where Ls = [\u03a3|T] do begin Success := true; for each irreversible type in \u03a3 do if Success then begin focus this irreversible type to \u03a3 ; Success := prove irrev(\u03a3 ); end; end; return Success and prove irrev lst(T).bool. function prove irrev(\u03a3: focused sequent); /* prove irrev(\u03a3) returns true if the focused sequent \u03a3 is provable; otherwise it returns false. */ var Success: bool; var Ls rev: list of unfocused sequents; var Ls irrev: list of focused sequents; Success := true; for each rule application to the focused type in \u03a3 do if Success then begin Ls rev := its reversible premises; Ls irrev := its irreversible premises; Success := prove rev lst(Ls rev) and prove irrev lst(Ls irrev); end; return Success.",
"uris": null,
"num": null
},
"FIGREF6": {
"type_str": "figure",
"text": "and : ((((Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)))\\(Nt(s(A))\\Sf ))\\(((Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)))\\ (Nt(s(A))\\Sf )))/(((Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)))\\(Nt(s(A))\\Sf )) :\u03bbD\u03bbE\u03bbF\u03bbG[((E F) G) \u2227 ((D F) G)] Cicero : Nt(s(m)) : c everyone : (SA \u2191 Nt(s(B))) \u2193 SA : \u03bbC\u2200D[(person D) \u2192 (C D)] humanist : CNA/CNA : \u03bbB\u03bbC[(B C) \u2227 (humanist C)] is : (Nt(s(A))\\Sf )/(NB\u2295(CNC/CNC)) : \u03bbD\u03bbE(D \u2192 F.[E = F]; G.((G \u03bbH[H = E]) E)) loves : (Nt(s(A))\\Sf )/NB : love someone : (SA \u2191 Nt(s(B))) \u2193 SA : \u03bbC\u2203D[(person D) \u2227 (C D)] Tully : Nt(s(m)) : tThe (classical) quantification example is:(foc(1)) everyone+loves+someone : Sf For this lexical lookup yields:(SA \u2191 Nt(s(B))) \u2193 SA : \u03bbC\u2200D[(person D) \u2192 (C D)], (Nt(s(E))\\Sf )/NF : love, (SG \u2191 Nt(s(H))) \u2193 SG : \u03bbI\u2203J[(person J) \u2227 (I J)] \u21d2 Sf a : \u2200g(\u2200f ((Sf \u2191 Nt(s(g))) \u2193 Sf )/CNs(g)) : \u03bbA\u03bbB\u2203C[(A C) \u2227 (B C)] and : \u2200f (( ?Sf \\[] \u22121 [] \u22121 Sf )/Sf ) : (\u03a6 n+ 0 and) and : \u2200a\u2200f (( ?( Na\\Sf )\\[] \u22121 [] \u22121 ( Na\\Sf ))/ ( Na\\Sf )) : (\u03a6 n+ (s 0) and) believes : (( \u2203gNt(s(g))\\Sf )/(CPthat Sf )) :\u02c6\u03bbA\u03bbB(Pres ((\u02c7believe A) B)) bill : Nt(s(m)) : b catch : (( \u2203aNa\\Sb)/\u2203aNa) :\u02c6\u03bbA\u03bbB",
"uris": null,
"num": null
},
"FIGREF7": {
"type_str": "figure",
"text": "\u03bbAA john : Nt(s(m)) : j loses : (( \u2203gNt(s(g))\\Sf )/\u2203aNa) :\u02c6\u03bbA\u03bbB(Pres ((\u02c7lose A) B)) loves : (( \u2203gNt(s(g))\\Sf )/\u2203aNa) :\u02c6\u03bbA\u03bbB(Pres ((\u02c7love A) B)) man : CNs(m) : man necessarily : (SA/ SA) : Nec or :",
"uris": null,
"num": null
},
"FIGREF8": {
"type_str": "figure",
"text": "\u03bbAA the : \u2200n(Nt(n)/CNn) : \u03b9 to : ((PPto/\u2203aNa) \u2200n(( Nn\\Si)/( Nn\\Sb))) : \u03bbAA tries : (( \u2203gNt(s(g))\\Sf )/ ( \u2203gNt(s(g))\\Si)) :\u02c6\u03bbA\u03bbB((\u02c7tries\u02c6(\u02c7A B)) B) unicorn : CNs(n) : unicorn walk : ( \u2203aNa\\Sb) :\u02c6\u03bbA(\u02c7walk A) walks : ( \u2203gNt(s(g))\\Sf ) :\u02c6\u03bbA(Pres (\u02c7walk A)) woman : CNs(f ) : womanFigure 14Montague Test lexicon.",
"uris": null,
"num": null
},
"FIGREF9": {
"type_str": "figure",
"text": "Nt(s(m)) \u21d2 Nt(s(m)) \u2295R Nt(s(m)) \u21d2 Nt(s(m))\u2295(CNA/CNA)Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf \\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) , Nt(s(m)) \u21d2 Sf CNA \u21d2 CNA CNA \u21d2 CNA /L CNA/CNA , CNA \u21d2 CNA /R CNA/CNA \u21d2 CNA/CNA \u2295R CNA/CNA \u21d2 Nt(s(m))\u2295(CNA/CNA) Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf \\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) , CNA/CNA \u21d2 Sf \u2295L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), Nt(s(m))\u2295(CNA/CNA) \u21d2 Sf \\R (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), Nt(s(m))\u2295(CNA/CNA) \u21d2 Nt(s(m))\\Sf /R (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) \u21d2 (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) 1 CNA \u21d2 CNA CNA \u21d2 CNA /L CNA/CNA , CNA \u21d2 CNA /R CNA/CNA \u21d2 CNA/CNA \u2295R CNA/CNA \u21d2 Nt(s(m))\u2295(CNA/CNA) Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf \\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) , CNA/CNA \u21d2 Sf \\R (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), CNA/CNA \u21d2 Nt(s(m))\\Sf \\R CNA/CNA \u21d2 ((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf ) Nt(s(m)) \u21d2 Nt(s(m)) \u2295R Nt(s(m)) \u21d2 Nt(s(m))\u2295(CNA/CNA) Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf \\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)) , Nt(s(m)) \u21d2 Sf \\R (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), Nt(s(m)) \u21d2 Nt(s(m))\\Sf \\R Nt(s(m)) \u21d2 ((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf ) 1 Nt(s(m)) \u21d2 Nt(s(m)) Sf \u21d2 Sf\\L Nt(s(m)), Nt(s(m))\\Sf \u21d2 Sf \\L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), ((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf ) \u21d2 Sf \\L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), Nt(s(m)), ((",
"uris": null,
"num": null
},
"FIGREF10": {
"type_str": "figure",
"text": ")\\Sf )) \u21d2 Sf /L Nt(s(m)), (Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)), Nt(s(m)), ((((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf ))\\(((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf )))/(((Nt(s(m))\\Sf )/(Nt(s(m))\u2295(CNA/CNA)))\\(Nt(s(m))\\Sf )) , CNA/CNA \u21d2 SfFigure 15Coordination of unlike types.",
"uris": null,
"num": null
},
"TABREF0": {
"text": "Lambek calculus types have the following interpretation in semigroups or monoids:",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Types</td><td>F ::= P | F\\F | F/F | F\u2022F</td></tr><tr><td>Configurations</td><td>O ::= F | F, O</td></tr><tr><td>Sequents</td><td>Seq(L) ::= O \u21d2 F</td></tr></table>"
},
"TABREF1": {
"text": "The table inExample (32) summarizes the notational convention on arbitrary (atomic or compound) polarized formulas P and Q: If reversible output types Q occur in input position, then they are irreversible; whereas if reversible output types P occur in input position, then they are irreversible.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>(32)</td><td/><td/></tr><tr><td/><td colspan=\"2\">input output</td></tr><tr><td>rev.</td><td>P</td><td>Q</td></tr><tr><td colspan=\"2\">irrev. Q</td><td>P</td></tr><tr><td colspan=\"3\">Notice that in Example (32) arbitrary polarized types exhibit a kind of De Morgan</td></tr><tr><td>duality:</td><td/><td/></tr></table>"
},
"TABREF2": {
"text": "On the contrary, if we have the biases n, s \u2208 At \u2022 and q \u2208 At \u2022 we have the following blocked derivation: *",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>n\\s, s\\q \u21d2 q n, n\\s, s\\q \u21d2 q</td><td>foc</td></tr><tr><td colspan=\"2\">n, n\\s \u21d2 s n, n\\s, s\\q \u21d2 q q \u21d2 q foc n, n\\s, s\\q \u21d2 q</td><td>\\L</td></tr><tr><td>Whereas we have:</td><td/></tr></table>"
}
}
}
}