{
"paper_id": "J18-4001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:20:17.330019Z"
},
"title": "Lifetime Achievement Award",
"authors": [
{
"first": "The",
"middle": [
"Lost"
],
"last": "Combinator",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh Informatics Forum",
"location": {}
},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh Informatics Forum",
"location": {}
},
"email": "steedman@inf.ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Let me begin by thanking the Association for Computational Linguistics and its Executive Committee for conferring on me the great honor of their Lifetime Achievement Award for 2018, which of course I share with all the wonderful students and colleagues that have made many essential contributions to this work over many years. At the heart of the work that I have been pursuing over my research lifetime so far, whether in parsing and sentence processing, spoken language understanding, semantics, or even in musical understanding by machine, there lies a theory of natural language grammar that brings parsing, compositional semantics, statistical modeling, and logical inference into the closest possible relation. This theory of grammar is combinatory, in the sense that its operations are type-dependent and restricted to strictly string-adjacent phonologically or graphologically-realized inputs, and categorial, in the sense that those operands pair a syntactic type with a type-transparent semantic representation or logical form. I'd like to use this opportunity to briefly address three questions that revolve around the theory of grammar, both combinatory and otherwise. The first question concerns the way that Combinatory Categorial Grammar (CCG) was developed with a number of colleagues, over a number of stages and in slightly different forms. The second is an essentially evolutionary question of why natural language grammar should take a combinatory form. The third question is that of what the future holds for CCG and other structural theories of grammar in computational linguistics and NLP in the age of deep learning. 
I have called this talk \"The Lost Combinator\" in homage to the Victorian era poem \"The Lost Chord,\" in the hope of suggesting that the theoretical development of CCG has always been empirical, rather than axiomatic, in search of the simplest explanation of the facts of language, rather than for confirmation of linguistic received opinion, however intuitively salient.",
"pdf_parse": {
"paper_id": "J18-4001",
"_pdf_hash": "",
"abstract": [
{
"text": "Let me begin by thanking the Association for Computational Linguistics and its Executive Committee for conferring on me the great honor of their Lifetime Achievement Award for 2018, which of course I share with all the wonderful students and colleagues that have made many essential contributions to this work over many years. At the heart of the work that I have been pursuing over my research lifetime so far, whether in parsing and sentence processing, spoken language understanding, semantics, or even in musical understanding by machine, there lies a theory of natural language grammar that brings parsing, compositional semantics, statistical modeling, and logical inference into the closest possible relation. This theory of grammar is combinatory, in the sense that its operations are type-dependent and restricted to strictly string-adjacent phonologically or graphologically-realized inputs, and categorial, in the sense that those operands pair a syntactic type with a type-transparent semantic representation or logical form. I'd like to use this opportunity to briefly address three questions that revolve around the theory of grammar, both combinatory and otherwise. The first question concerns the way that Combinatory Categorial Grammar (CCG) was developed with a number of colleagues, over a number of stages and in slightly different forms. The second is an essentially evolutionary question of why natural language grammar should take a combinatory form. The third question is that of what the future holds for CCG and other structural theories of grammar in computational linguistics and NLP in the age of deep learning. 
I have called this talk \"The Lost Combinator\" in homage to the Victorian era poem \"The Lost Chord,\" in the hope of suggesting that the theoretical development of CCG has always been empirical, rather than axiomatic, in search of the simplest explanation of the facts of language, rather than for confirmation of linguistic received opinion, however intuitively salient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the late 1960s (when I was a psychology undergraduate at the University of Sussex under Stuart Sutherland, and then started as a graduate student in artificial intelligence at Edinburgh under Christopher Longuet-Higgins), a broad community of theoretical linguists, psychologists, and computational linguists saw themselves as all working on the same problem, under the definition provided by the \"transformational\" theory of grammar proposed by Chomsky (1957 Chomsky ( , 1965 , using theories of psycholinguistic processing, language acquisition, and language evolution proposed by Lashley (1951) , Miller, Galanter, and Pribram (1960) , Miller (1967) , and Lenneberg (1967) , theories of natural language semantics proposed by Carnap (1956) , Montague (1970) , and Lewis (1970) , and computational models of parsing such as those proposed by Thorne, Bratley, and Dewar (1968) and Woods (1970) . (I myself was so convinced that this program would succeed that I believed it was time to apply the same methods to other cognitive faculties, taking as my research project for Ph.D. their application to the interpretation of music by machine, following the lead of Max Clowes [1971] in machine vision.)",
"cite_spans": [
{
"start": 449,
"end": 462,
"text": "Chomsky (1957",
"ref_id": "BIBREF12"
},
{
"start": 463,
"end": 479,
"text": "Chomsky ( , 1965",
"ref_id": "BIBREF13"
},
{
"start": 586,
"end": 600,
"text": "Lashley (1951)",
"ref_id": "BIBREF44"
},
{
"start": 603,
"end": 639,
"text": "Miller, Galanter, and Pribram (1960)",
"ref_id": "BIBREF57"
},
{
"start": 642,
"end": 655,
"text": "Miller (1967)",
"ref_id": "BIBREF56"
},
{
"start": 662,
"end": 678,
"text": "Lenneberg (1967)",
"ref_id": "BIBREF45"
},
{
"start": 732,
"end": 745,
"text": "Carnap (1956)",
"ref_id": "BIBREF8"
},
{
"start": 748,
"end": 763,
"text": "Montague (1970)",
"ref_id": "BIBREF59"
},
{
"start": 770,
"end": 782,
"text": "Lewis (1970)",
"ref_id": "BIBREF46"
},
{
"start": 847,
"end": 880,
"text": "Thorne, Bratley, and Dewar (1968)",
"ref_id": "BIBREF82"
},
{
"start": 885,
"end": 897,
"text": "Woods (1970)",
"ref_id": "BIBREF89"
},
{
"start": 1170,
"end": 1183,
"text": "Clowes [1971]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In the Beginning",
"sec_num": "1."
},
{
"text": "Almost immediately, this consensus fell apart. First, Chomsky himself was among the first (1965) to recognize that transformational rules, though descriptively revealing, were so expressive as to have little explanatory force, and required many apparently arbitrary constraints (Ross, 1967) . Second, psychologists realized that psycholinguistic measures of processing difficulty of sentences bore almost no relation to their transformational derivational complexity (Marslen-Wilson 1973; Fodor, Bever, and Garrett, 1974) . Finally, computational linguists attempting to implement transformational grammars as parsers realized that they were spending all their time implementing even more constraints on rules, in order to limit search arising from overgeneration (Friedman 1971; Gross 1978) . (Meanwhile, I realized that the problem had not in fact been solved, and returned to natural language processing, thanks to a postdoc at Sussex with Philip Johnson-Laird.) This disillusion wasn't just a case of internal academic squabbling. There were also a couple of influential reports commissioned by the U.S. and UK governments that ended funding for machine translation (MT) and artificial intelligence (AI) (Pierce et al. 1966; Lighthill 1973) . As a result of the second of these reports, which determined that AI was never going to work, PhDs in artificial intelligence like my classmate Geoff Hinton and myself spent ten years or so after graduation in psychology departments (in my case, at the Universities of Sussex and Warwick), until yet another report said AI was working after all and that Britain and the U.S. were falling behind Japan in this vital area. 
As a result, I could get hired again in computer science, first briefly back at Edinburgh, and then at the University of Pennsylvania. (I learned a lesson from this odyssey that I have tried to remember whenever I have been appointed to a committee to report on anything, which is that while reports very rarely do any good, they can very easily do a great deal of harm.)",
"cite_spans": [
{
"start": 90,
"end": 96,
"text": "(1965)",
"ref_id": null
},
{
"start": 278,
"end": 290,
"text": "(Ross, 1967)",
"ref_id": "BIBREF71"
},
{
"start": 467,
"end": 488,
"text": "(Marslen-Wilson 1973;",
"ref_id": "BIBREF52"
},
{
"start": 489,
"end": 521,
"text": "Fodor, Bever, and Garrett, 1974)",
"ref_id": "BIBREF20"
},
{
"start": 764,
"end": 779,
"text": "(Friedman 1971;",
"ref_id": "BIBREF21"
},
{
"start": 780,
"end": 791,
"text": "Gross 1978)",
"ref_id": "BIBREF25"
},
{
"start": 950,
"end": 965,
"text": "Johnson-Laird.)",
"ref_id": null
},
{
"start": 1208,
"end": 1228,
"text": "(Pierce et al. 1966;",
"ref_id": "BIBREF67"
},
{
"start": 1229,
"end": 1244,
"text": "Lighthill 1973)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In the Beginning",
"sec_num": "1."
},
{
"text": "Meanwhile, as a result of these conflicts, the scientific study of language fragmented. The linguists swiftly abjured any responsibility for their grammars (\"Competence\") bearing any relation to processing (\"Performance\"). Because the psychologists could hardly abandon Performance, they in turn became agnostic about grammar, retreating to context-free surface grammar (which they tended to refer to as \"parsing strategies\"), or a touchingly optimistic belief in its emergence from neural models. Meanwhile, the computational linguists (whose machines were growing exponentially in size and speed from the 16K byte core of the machine that supported the whole group when I started my graduate studies, on to levels that would soon permit parsing the entire contents of the then embrionic Web) similarly found that very little of what the linguists and psychologists cared about was usable at scale, and that none of it significantly improved overall performance over very much simpler context-free or even finite-state methods that the linguists had shown to be incomplete. The reason of course was Zipf's law, which means that the events with respect to which the low-level methods are incomplete are off in the long tail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In the Beginning",
"sec_num": "1."
},
{
"text": "It also became apparent to a few computationalists working on speech, MT, and information retrieval that the real problem was not grammar but ambiguity and its resolution by world-knowledge, and that the solution lay in probabilistic models (Bar-Hillel 1960 Sp\u00e4rck Jones 1964 Wilks 1975; Jelinek and Lafferty 1991 ) (although it was not immediately apparent how to combine statistical models with grammar-based systems without making obviously false independence assumptions).",
"cite_spans": [
{
"start": 241,
"end": 257,
"text": "(Bar-Hillel 1960",
"ref_id": null
},
{
"start": 258,
"end": 275,
"text": "Sp\u00e4rck Jones 1964",
"ref_id": "BIBREF74"
},
{
"start": 276,
"end": 287,
"text": "Wilks 1975;",
"ref_id": "BIBREF87"
},
{
"start": 288,
"end": 313,
"text": "Jelinek and Lafferty 1991",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In the Beginning",
"sec_num": "1."
},
{
"text": "Nevertheless, as any red-blooded psychologist had always insisted, the divorce between competence and performance that everyone else had accepted did not make any sense. The grammar and the processor had to have evolved in lock-step, as a package deal, for what could be the evolutionary selective advantage of a grammar that you cannot process, or a parser without a grammar?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In the Beginning",
"sec_num": "1."
},
{
"text": "It seemed equally obvious that surface syntax and the underlying semantic or conceptual representation must also be closely related, since the only reasonable basis for child language acquisition that has ever been on offer is that the child attaches language-specific grammar to a universal conceptual relation or \"language of mind\" (Miller 1967; Bowerman 1973; Wexler and Culicover 1980) . It seemed to follow that radically new theories of grammar were needed.",
"cite_spans": [
{
"start": 334,
"end": 347,
"text": "(Miller 1967;",
"ref_id": "BIBREF56"
},
{
"start": 348,
"end": 362,
"text": "Bowerman 1973;",
"ref_id": "BIBREF6"
},
{
"start": 363,
"end": 389,
"text": "Wexler and Culicover 1980)",
"ref_id": "BIBREF85"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In the Beginning",
"sec_num": "1."
},
{
"text": "Theoretical linguists agree that the central problem for the theory of grammar is discontinuity or non-adjacent dependency between predicates and their arguments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Discontinuity",
"sec_num": "2."
},
{
"text": "(1) Chomsky described discontinuity in terms of movement, which was known to be formally very unconstrained. By contrast, the ATN parser used in the LUNAR project (Woods, Kaplan, and Nash-Webber 1972) reduced all discontinuity to local operations on registers (Thorne, Bratley, and Dewar 1968; Bobrow and Fraser 1969; Woods 1970) .",
"cite_spans": [
{
"start": 163,
"end": 200,
"text": "(Woods, Kaplan, and Nash-Webber 1972)",
"ref_id": "BIBREF91"
},
{
"start": 260,
"end": 293,
"text": "(Thorne, Bratley, and Dewar 1968;",
"ref_id": "BIBREF82"
},
{
"start": 294,
"end": 317,
"text": "Bobrow and Fraser 1969;",
"ref_id": "BIBREF5"
},
{
"start": 318,
"end": 329,
"text": "Woods 1970)",
"ref_id": "BIBREF89"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Discontinuity",
"sec_num": "2."
},
{
"text": "In particular, unbounded wh-dependencies like the above were handled by: (a) putting a pointer into a * or HOLD register as soon as the \"which\" was encountered without regard to where it would end up; and (b) retrieving the pointer from HOLD when the verb needing an object \"had\" was encountered without regard to where it had started out. (It also included an ingenious mechanism for coordination called SYSCONJ, which one finds even now being reinvented on an almost yearly basis-cf. Woods [2010] .) A * register was also used for wh-constructions within a systemic grammar framework by Winograd (1972, pages 52-53) in his inspiring conversational program SHRDLU.",
"cite_spans": [
{
"start": 486,
"end": 498,
"text": "Woods [2010]",
"ref_id": "BIBREF90"
},
{
"start": 589,
"end": 617,
"text": "Winograd (1972, pages 52-53)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Discontinuity",
"sec_num": "2."
},
{
"text": "However, it was unclear how to generalize the HOLD register to handle the multiple long-range dependencies, including crossing dependencies, that are found in many other languages. In particular, if the HOLD register were assumed to be a stack, then the ATN becomes a two-stack machine (since we are already implicitly using one stack as a PDA to parse the context-free core grammar).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Discontinuity",
"sec_num": "2."
},
{
"text": "On the computational side at least, the reaction to this impass took two distinct forms. Both reactions took the form of trying to reduce the two major operators of the transformation theory, substitution of immediate constituents, or what is nowadays called \"Merge,\" and \"Move,\" or displacement of non-immediate constituents, to one. On the one hand, Lexical Functional Grammar (Bresnan and Kaplan 1982) and Headdriven Phrase Structure Grammar (Pollard and Sag 1994) followed Kay (1979) in making unification the basis of movement and merger. Because unification can pass information across unbounded structures, this can be thought of as reducing Merge to Move.",
"cite_spans": [
{
"start": 379,
"end": 404,
"text": "(Bresnan and Kaplan 1982)",
"ref_id": "BIBREF7"
},
{
"start": 445,
"end": 467,
"text": "(Pollard and Sag 1994)",
"ref_id": "BIBREF68"
},
{
"start": 477,
"end": 487,
"text": "Kay (1979)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Discontinuity",
"sec_num": "2."
},
{
"text": "On the other hand, Generalized Phrase Structure Grammar (Gazdar 1981) , Tree Adjoining Grammar (TAG; Joshi and Levy 1982) , and Combinatory Categorial Grammar (CCG, Ades and Steedman, 1982) sought to reduce Move to various forms of local merger. In particular, the latter authors suggested that the same stack could be used to capture both long-range dependency and recursion in CCG. 1",
"cite_spans": [
{
"start": 56,
"end": 69,
"text": "(Gazdar 1981)",
"ref_id": "BIBREF22"
},
{
"start": 101,
"end": 121,
"text": "Joshi and Levy 1982)",
"ref_id": "BIBREF33"
},
{
"start": 165,
"end": 189,
"text": "Ades and Steedman, 1982)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Problem of Discontinuity",
"sec_num": "2."
},
{
"text": "Natural language grammar exhibits discontinuity because semantically language is an applicative system. Applicative systems (such as programming languages) support the twin notions of: (a) Application of a function/concept to an argument/entity; and (b) Abstraction, or the definition of a new function/concept in terms of existing ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "Language is in that sense inherently computational. It seems to follow that linguistics is (or should be) inherently computational as well. (Of course, it does not follow that computationalists have nothing to learn from linguistics.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "There are two ways of modeling abstraction in applicative systems: Taking abstraction itself as a primitive operation (\u03bb-calculus, LISP):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "a. father Esau \u21d2 Isaac b. grandfather = \u03bbx.father ( father x) c. grandfather Esau \u21d2 Abraham (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "or Defining abstraction in terms of a collection of operators on strictly adjacent terms aka Combinators, such as function composition (Combinatory Calculus, MIRANDA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "b . grandfather = B father father",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "The latter does the work of the \u03bb-calculus without using any variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "Despite the resemblance of the \"traces\" (or copies) and \"operators\" (or complementizer positions) of the transformational theory to the \u03bb-operators and variables of applicative systems of the first kind, natural language actually seems to be a system of the second, combinatory kind. The evidence stems from the fact that natural language deals with all sorts of fragments that linguists do not normally think of as semantically typable constituents, without the use of any phonologically realized equivalent of variables, such as pronouns: 4These fragments are diagnostic of a Combinatory Calculus based on B n , T, and the \"duplicator\" S n , plus application (Steedman 1987; Szabolcsi 1989 ; Steedman and Baldridge 2011). 2",
"cite_spans": [
{
"start": 661,
"end": 676,
"text": "(Steedman 1987;",
"ref_id": "BIBREF75"
},
{
"start": 677,
"end": 691,
"text": "Szabolcsi 1989",
"ref_id": "BIBREF80"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Natural Language Allow Discontinuity?",
"sec_num": "3."
},
{
"text": "CCG lexicalizes all bounded dependencies, such as passive, raising, control, exceptional case-marking, and so forth, via lexical logical form. All syntactic rules are Combinatory-that is, binary operators over contiguous phonologically realized categories and their logical forms. These rules are restricted by a Combinatory Projection Principle, which in essence says they cannot override the decisions already taken in the language-specific lexicon, but must be consistent with and project unchanged the directionality specified there. All such language-specific information is specified in the lexicon: The combinatory rules like composition are free and universal. All arguments, such as subjects and objects, are lexically type-raised to be functions over the predicate, as if they were morphologically cased as in Latin, exchanging the roles of predicate and argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "All long-range dependencies are established by contiguous reduction of a whelement, such as (N\\N)/(S/NP), with an adjacent non-standard constituent with category S/NP, formed by rules of function composition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "(5) 6The combinatory rules synchronize composition of the syntactic types shown here with corresponding composition of logical forms (suppressed in the derivations above), to yield the logical forms shown as \u03bb-terms for the resulting nouns N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "To capture the construction in Example (4c), whose syntactic derivation we pass over here, we also need rules based on the duplicator S: a. \"wash X before eating X b. VP/NP : \u03bbx.before (eatx)(washx) \u2261 S (B before eat) wash",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "To capture constructions like Example (4d) (whose syntactic derivation is similarly suppressed), we also need rules based on second-order composition B 2 : a. \"Y saw X teach W to sing. b. ((S\\NP)\\NP)\\NP : \u03bbw\u03bbx\u03bby.help (teach (sing w) wx) xy \u2261 B 2 sees (B teach sing)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "CCG thus reduces the operator MOVE of transformational theory to applications of purely adjacent operators-that is, to recursive combinatory MERGE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "Interestingly, the latest \"minimalist\" form of the transformational theory has also proposed that Move should be relabeled as an \"internal\" form of standard or \"external\" Merge (Chomsky 2001 (Chomsky /2004 , though without providing any formal basis for the reduction other than identifying internal Merge as \"a grammatical transformation.\" (If anything deserved the soubriquet \"the lost combinator,\" it would be this notional unitary combination of application and abstraction in a single perfect operator, linking or merging all types, as in the epigraph to this article.)",
"cite_spans": [
{
"start": 177,
"end": 190,
"text": "(Chomsky 2001",
"ref_id": null
},
{
"start": 191,
"end": 205,
"text": "(Chomsky /2004",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar (CCG)",
"sec_num": "4."
},
{
"text": "B 2 rules allow us to \"grow\" categories of arbitrarily high valency, such as ((S\\NP y )\\NP x )\\ NP w . As we saw earlier, in some Germanic languages like Dutch, Swiss German, and West-Flemish, serial verbs are linearized using such rules to require crossing discontinuous dependencies. Thus, B 2 rules give CCG slightly greater than context-free power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "Nevertheless, CCG is still not as expressive as movement. In particular, we can only capture permutations that are what is called \"separable,\" where separability is related to the idea of obtaining the permutations by rebracketing and rotating sister nodes (Steedman 2018) .",
"cite_spans": [
{
"start": 257,
"end": 272,
"text": "(Steedman 2018)",
"ref_id": "BIBREF78"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "For example, for the categories of the form A|B, B|C, C|D, and D, it is obvious by inspection that we cannot recognize the following permutations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i. * B|C D A|B C|D ii. * C|D A|B D B|C",
"eq_num": "(9)"
}
],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "This generalizatiion appears likely to be true cross-linguistically for the components of this form for the NP \"These five young boys\": i. *Five boys these young ii. *Young these boys five (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "Twenty-one of the 22 separable permutations of \"These five young boys\" are attested (Cinque 2005; Nchare 2012 ). The two forbidden orders are among the unattested three. 3 The probability of this happening by chance is the prior probability of the hypothesis itself-that is the reciprocal of 24 choose 2-times 3 choose 2, the number of ways of predicting two impossible orderings out of three unattested ones, given by the following:",
"cite_spans": [
{
"start": 84,
"end": 97,
"text": "(Cinque 2005;",
"ref_id": "BIBREF14"
},
{
"start": 98,
"end": 109,
"text": "Nchare 2012",
"ref_id": "BIBREF62"
},
{
"start": 170,
"end": 171,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = 3 2 24 2 = (3 * 2)/2 (24 * 23)/2 = 6 552 \u2248 0.01",
"eq_num": "(11)"
}
],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "-that is, about one in a hundred. (If the sole predicted order that remains unattested so far were to be attested, this chance would fall to about one in 250.) The number of separable permutations grows much more slowly in n than n!, the number of all permutations. For example, for n = 8, around 80% of the permutations are non-separable. There are obvious implications for the problem of alignment in machine translation and neural semantic parsing, to which we will return.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "In 1986, I moved to Computer and Information Science at Penn, first as a visitor (incredibly, partly still under Sloan funding-at least, that was what Aravind Joshi told me), and then as a member of faculty, where much of the further development of CCG was worked out. Crucially, Aravind's students proved in a series of papers (Vijay-Shanker, Weir, and Joshi 1987; Joshi, Vijay-Shanker, and Weir 1991; passim) that the \"shared stack\" claim of Ades and Steedman (1982) was correct, by showing that both CCG and Aravind Joshi's TAG were weakly equivalent to Linear Indexed Grammar (Gazdar 1988 ), a new level of the Language Hierarchy characterized by the (Linear) Embedded Push-down Automaton (cf. Kuhlmann, Koller, and Satta 2015) .",
"cite_spans": [
{
"start": 444,
"end": 468,
"text": "Ades and Steedman (1982)",
"ref_id": "BIBREF1"
},
{
"start": 580,
"end": 592,
"text": "(Gazdar 1988",
"ref_id": "BIBREF23"
},
{
"start": 698,
"end": 731,
"text": "Kuhlmann, Koller, and Satta 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "The class of languages characterizable by these formalisms fell within the requirements of what Joshi (1988) called \"mild context sensitivity\" (MCS), which proposed as a criterion for what could count as a computationally \"reasonable\" theory of natural languages-informally speaking, that they are polynomially recognizable, and exhibit constant growth and some limit on crossing dependencies. However, the MCS class is much much larger than the CCG/TAG languages, including the multiple context free languages and even (under certain further assumptions) the languages of Chomskian minimalism, so it seems appropriate to distinguish TAG and CCG as \"slightly noncontext-free\" (SNCF, with apologies to the French railroad company).",
"cite_spans": [
{
"start": 96,
"end": 108,
"text": "Joshi (1988)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity of CCG",
"sec_num": "4.1"
},
{
"text": "Despite its SNCF complexity, CCG was originally assumed to be totally unpromising as a grammar formalism for parsing, because of the extra derivational ambiguity introduced by type-raising and the combinatory rules, particularly composition. However, these supposedly \"spurious\" constituents also show up under coordination, and as intonational phrases. So any grammar with the same coverage as CCG will engender the same degree of nondeterminism in the parser (because it is there in the grammar). In fact, this is just another drop in the ocean of derivational ambiguity that faces all natural language processors, and can be handled by exactly the same statistical models as other varieties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG for Natural Language Processing",
"sec_num": "4.2"
},
{
"text": "In particular, the head-word dependency models pioneered by Don Hindle, Mats Rooth, Mike Collins, and Eugene Charniak are straightforwardly applicable (Hockenmaier and Steedman 2002b; Clark and Curran 2004) . CCG is also particularly well-adapted to parsing with \"supertagger\" front ends, which can be optimized using embeddings and long short-term memory (LSTM) (Lewis and Steedman 2014; Lewis, Lee, and Zettlemoyer 2016).",
"cite_spans": [
{
"start": 151,
"end": 183,
"text": "(Hockenmaier and Steedman 2002b;",
"ref_id": "BIBREF28"
},
{
"start": 184,
"end": 206,
"text": "Clark and Curran 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG for Natural Language Processing",
"sec_num": "4.2"
},
{
"text": "CCG is now quite widely used in applications, especially those that call for transparency between semantic and syntactic processing, and/or dislocation, such as machine translation (Birch and Osborne 2011; Mehay and Brew 2012), machine reading (Krishnamurthy and Mitchell 2014) , incremental parsing (Pareschi and Steedman 1987; Niv 1994; Xu, Clark, and Zhang 2014; Ambati et al. 2015) , human-robot interaction (Chai et al. 2014; Matuszek et al. 2013) , and semantic parser induction (Zettlemoyer and Collins 2005; Kwiatkowski et al. 2010; Abend et al. 2017) . Some of my own work with the same techniques has returned to their application in musical analysis (Granroth-Wilding and Clark 2014; McLeod and Steedman 2016), and shown that CCG grammars of the same SNCF class and parsing models of the same statistical kind are required there as well: It is only in the details of their compositional semantics that music and language differ very greatly.",
"cite_spans": [
{
"start": 244,
"end": 277,
"text": "(Krishnamurthy and Mitchell 2014)",
"ref_id": "BIBREF39"
},
{
"start": 300,
"end": 328,
"text": "(Pareschi and Steedman 1987;",
"ref_id": "BIBREF65"
},
{
"start": 329,
"end": 338,
"text": "Niv 1994;",
"ref_id": "BIBREF63"
},
{
"start": 339,
"end": 365,
"text": "Xu, Clark, and Zhang 2014;",
"ref_id": "BIBREF91"
},
{
"start": 366,
"end": 385,
"text": "Ambati et al. 2015)",
"ref_id": "BIBREF2"
},
{
"start": 412,
"end": 430,
"text": "(Chai et al. 2014;",
"ref_id": "BIBREF10"
},
{
"start": 431,
"end": 452,
"text": "Matuszek et al. 2013)",
"ref_id": "BIBREF53"
},
{
"start": 485,
"end": 515,
"text": "(Zettlemoyer and Collins 2005;",
"ref_id": "BIBREF91"
},
{
"start": 516,
"end": 540,
"text": "Kwiatkowski et al. 2010;",
"ref_id": "BIBREF42"
},
{
"start": 541,
"end": 559,
"text": "Abend et al. 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG for Natural Language Processing",
"sec_num": "4.2"
},
{
"text": "Rather than reflecting on this substantial body of work in detail, I'd like to conclude by examining two further more speculative questions. The first is an evolutionary question: Why should natural language be a combinatory calculus in the first place? The second is a question about the future development of our subject: Will CCG and other grammar-based theories continue to be relevant to NLP in the age of deep learning and recursive neural networks? I'll take these questions in order in the next two sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG for Natural Language Processing",
"sec_num": "4.2"
},
{
"text": "Natural language looks like a distinctively combinatory applicative system because B, T, and S evolved independently, in our animal ancestors, to support planning of action sequences before there was any language (Steedman 2002) . Even pure reactive planning animals like pigeons need application. Composition B and Substitution S are needed for seriation of actions. (Even rats need to compose sensory-motor actions to make plans.) Macaques can form the concept \"food that you need to wash before eating\"). Type-raising T is needed to map tools onto actions that they allow (affordances). (Chimpanzees and some other animals can plan with tools.) Second-order combinators like B 2 are needed to make plans with arbitrary numbers of entities, including non-present tools, and crucially including other agents whose cooperation is yet to be obtained. Only humans seem to be able to do the latter. Thus, chimpanzees solve the (mistitled) monkey and bananas problem, using tools like old crates to gain altitude in order to obtain objects that are otherwise out of reach (K\u00f6hler 1925) (Figure 1(a) ).",
"cite_spans": [
{
"start": 213,
"end": 228,
"text": "(Steedman 2002)",
"ref_id": "BIBREF77"
},
{
"start": 1068,
"end": 1081,
"text": "(K\u00f6hler 1925)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 1082,
"end": 1094,
"text": "(Figure 1(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Why is Language Combinatory?",
"sec_num": "5."
},
{
"text": "K\u00f6hler's chimpanzee Grande's planning amounts to composing (B) affordances (T) of crates and so on, such as actions of moving, stacking, and climbing-on them. Similarly, macacques can apply S (B before eat) wash to a sweet potato (Kawai 1965) (Figure 1(b) ). Interestingly, K\u00f6hler and much subsequent work showed that the crates and other tools have to be there in the situation already for the animal to be able to plan with them: Grande was unable to achieve plans that involve fetching crates from the next room, even if she had recently seen them there.",
"cite_spans": [
{
"start": 230,
"end": 242,
"text": "(Kawai 1965)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 243,
"end": 255,
"text": "(Figure 1(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Why is Language Combinatory?",
"sec_num": "5."
},
{
"text": "More generally, the problem of planning can be viewed as the problem of search for a sequence of actions in a lattice or labyrinth of possible states. Crucially, such search has the same recursive character as parser search. For example, there are both chart-based dynamic-programming and stack-based shift-reduce-style algorithms for the purpose. The latter seem a more plausible alternative in evolutionary terms, as affording a mechanism that is common to both semantic interpretation and parsing, rather than requiring the independent evolution of something like the CKY algorithm. 4 Planning therefore provides the infrastructure for linguistic performance, as well as the operators for competence grammar, allowing the two to emerge together, as what was referred to above as an evolutionary \"package deal.\"",
"cite_spans": [
{
"start": 586,
"end": 587,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why is Language Combinatory?",
"sec_num": "5."
},
{
"text": "I hope to have convinced you that CCG grammars are both linguistically and evolutionarily explanatory, as well as supporting practical semantic parsers that are fast enough to parse the Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "Nevertheless, like other supervised parsers, CCG parsers are limited by the weakness of parsing models based on only a million words of WSJ training data, no matter how cleverly we smooth them using synthetic data and embeddings. Moreover, the constructions they are specifically needed for-unbounded dependencies and so forth-are, as we have noted, rare, and off in the long tail. As a consequence, supervised parsers are easy to equal on performance overall by using end-to-end models (Vinyals et al., 2015) .",
"cite_spans": [
{
"start": 487,
"end": 509,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF84"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "This outcome says more about the weakness of treebank grammars and parsers than about the strength of deep learning. Nevertheless, in the area of semantic parser induction for arbitrary knowledge graphs such as FreeBase (Reddy, Lapata, and Steedman 2014) , I would say that CCG and other grammar-based parsers have already been superseded by semisupervised end-to-end training of deep neural networks (DNNs) (Dong and Lapata 2016, 2018; Jia and Liang 2016) , raising the question of whether DNNs will replace structured models for practical applications in general.",
"cite_spans": [
{
"start": 220,
"end": 254,
"text": "(Reddy, Lapata, and Steedman 2014)",
"ref_id": "BIBREF70"
},
{
"start": 408,
"end": 417,
"text": "(Dong and",
"ref_id": "BIBREF18"
},
{
"start": 418,
"end": 436,
"text": "Lapata 2016, 2018;",
"ref_id": "BIBREF18"
},
{
"start": 437,
"end": 456,
"text": "Jia and Liang 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "DNNs are effective because we don't have access to the universal semantic representations that allow the child to induce full CCG for natural languages and that we really ought to be using both in semantic parser induction, and in building knowledge graphs. Because the FreeBase Query language is quite unlike any kind of linguistic logical form, let alone like the universal language of mind, it may well be more practically effective with small and idiosyncratic data sets to induce semantic parsers for them by end-to-end deep neural brute force, rather than by CCG semantic parser induction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "But is it really possible that the problem of parsing could be completely solved by recurrent neural network (RNN)/LSTM, perhaps augmented by attention/a stack (He et al., 2017; Kuncoro et al., 2018) ? Do semantic-parsing-as-end-to-end-translation learners actually learn syntax, as has been claimed?",
"cite_spans": [
{
"start": 160,
"end": 177,
"text": "(He et al., 2017;",
"ref_id": "BIBREF26"
},
{
"start": 178,
"end": 199,
"text": "Kuncoro et al., 2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "It seems likely that end-to-end semantic role labeler parsers and neural machine translation will continue to have difficulty with long-range wh-dependencies, because the evidence for their detailed idiosyncrasies is so sparse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "For example, both English and French treat embedded subject extraction as a special case, either involving special bare complement verb categories (English) or a special complementizer \"qui\" (French). a. A woman who I believe (*that) won b. Une femme que je crois qui/*que\u00e0 gagn\u00e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "(This is actually predicted by CCG, which says that if you are an SVO language with relatively rigid word order, like English and French, then you will not in general be able to extract embedded subjects, whereas in verb-initial languages like Welsh or verb-final languages like Japanese and German, you will either be able to extract both embedded subjects and objects, or neither- [Steedman 1987 [Steedman , 2000 .)",
"cite_spans": [
{
"start": 383,
"end": 397,
"text": "[Steedman 1987",
"ref_id": "BIBREF75"
},
{
"start": 398,
"end": 414,
"text": "[Steedman , 2000",
"ref_id": "BIBREF76"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "Not surprisingly, at the time of writing, a well-known end-to-end DNN translation system Near You shows no sign of having learned these syntactic niceties. Starting from a legal English, we get an ambiguous French sentence whose back-translation to English translation means something different:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "This is the company that the agency told us owned the title. = C'est la compagnie que l'agence nous a dit d\u00e9tenir le titre. = This is the company that the agency told us to hold the title. 13Similarly, if we start with French subject extraction, we obtain ungrammatical English (which happily translates back into the original correct French):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "C'est la compagnie que l'agence nous a dit qui d\u00e9tient le titre. = *This is the company that the agency told us *that holds the title. = C'est la compagnie que l'agence nous a dit qui d\u00e9tient le titre. 14If instead we start with legal English subject extraction, using a bare complement, the (correct) translation again includes an infinitival that is incorrectly back-translated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "This is the company that they said had owned the bank. = C'est la compagnie qu'ils ont dit avoir poss\u00e9d\u00e9 la banque. = *This is the company they said they owned the bank. 15By contrast, supervised CCG parsers do rather well on embedded subject extraction (Hockenmaier and Steedman 2002a; Clark, Steedman, and Curran 2004) , which is a fairly frequent construction that happens to be rather unambiguously determined by the parsing model. End-to-end methods are similarly unreliable when faced with long-range agreement, even when the source sentence is completely unambiguous in this respect:",
"cite_spans": [
{
"start": 254,
"end": 286,
"text": "(Hockenmaier and Steedman 2002a;",
"ref_id": "BIBREF28"
},
{
"start": 287,
"end": 320,
"text": "Clark, Steedman, and Curran 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "The banks think that the chairman owns the stock, and know that it is stolen. = Les banques pensent que le pr\u00e9sident est propri\u00e9taire du stock et sait qu'il est vol\u00e9. = Banks think that the president owns the stock and knows it is stolen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "(16) Moreover, in principle at least, these constructions could be learned by grammarbased semantic parser induction from examples, using the methods of Kwiatkowski et al. (2010 Kwiatkowski et al. ( , 2011 and Abend et al. (2017) .",
"cite_spans": [
{
"start": 153,
"end": 177,
"text": "Kwiatkowski et al. (2010",
"ref_id": "BIBREF42"
},
{
"start": 178,
"end": 205,
"text": "Kwiatkowski et al. ( , 2011",
"ref_id": "BIBREF42"
},
{
"start": 210,
"end": 229,
"text": "Abend et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "These observations make it seem likely that there will be a continued need for structured representations in tasks like QA where long-range dependencies matter. Nevertheless, like rock n'roll, deep learning and distributional representations are clearly here to stay. The future in parsing for such tasks probably lies with hybrid systems using neural front ends for disambiguation, and grammars for assembling meaning representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CCG in the Age of Deep Learning",
"sec_num": "6."
},
{
"text": "The most important open problem in NLP remains the fact that natural language understanding involves inference as well as semantics, and we have no idea of the meaning representation involved. If your question is Has Verizon bought Yahoo?, the text will almost certainly answer it many times over. But it is almost equally certain to present the information in a form that is not immediately compatible with the form of the question. For example, sentences like the following use: a different verb; a noun rather than a verb; an implicative verb; an entailing quantification; a negated entailment; a modal verb; and disjunction: a. Verizon purchased Yahoo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "(\"Yes\") b. Verizon's purchase of Yahoo (\"Yes\") c. Verizon managed to buy Yahoo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "(\"Yes\") d. Verizon acquired every company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "(\"Yes\") e. Verizon doesn't own Yahoo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "(\"No\") f. Yahoo may be sold to Verizon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "(\"Maybe\") g. Verizon will buy Yahoo or Yazoo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "(\"Maybe not\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "To arrive at a meaning representation language that is form-independent (and ultimately language-independent), we are using CCG parsers to machine-read the Web for relations between typed named entities. in order to detect consistent patterns of entailment between relations over named entities of the same types, using directional similarity over entity vectors representing relations. We then build an entailment graph (cleaning it up and closing it under relations such as transitivity [cf. Berant et al. 2015] ). Cliques of mutually entailing relations in the entailment graph then constitute paraphrases that can be collapsed to a single relation identifier (Lewis and Steedman 2013a) . (This can be done across text from multiple languages [Lewis and Steedman 2013b] .)",
"cite_spans": [
{
"start": 489,
"end": 513,
"text": "[cf. Berant et al. 2015]",
"ref_id": null
},
{
"start": 663,
"end": 689,
"text": "(Lewis and Steedman 2013a)",
"ref_id": "BIBREF48"
},
{
"start": 746,
"end": 772,
"text": "[Lewis and Steedman 2013b]",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "We can then replace the original naive semantics for relation expressions with the relevant paraphrase cluster identifiers, and reparse the entire corpus using this now both form-independent and language-independent semantic representation, building an enormous knowledge graph, with the entities as nodes, and the paraphrase identifiers as relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "To answer questions concerning the knowledge in this graph, we parse questions Q into the same form-independent semantics representation, which is now the language of the knowledge graph itself. To answer the question, we use the knowledge graph and the entailment graph, and the following rule: a. if Q or anything that entails Q is in the knowledge graph, then answer in the positive. b. If \u00acQ or the negation of anything that Q entails is in the knowledge graph, then answer in the negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "For this to work, we need complex expressions (including negation, auxiliaries, modals, implicative verbs, etc.) to be nodes in their own right in both knowledge and entailment graphs. We shall then be able to discard hand-built knowledge graphs like Freebase in favor of a truly organic semantic net built in the language of mind, obviating the need to learn end-to-end transduction between semantic representations and the language of the knowledge graph. If this project is successful, the language-independent paraphrase cluster identifiers will perform the function of a \"hidden\" version of the decompositional semantic features in semantic representations like those of Katz and Postal (1964) , Jackendoff (1990) , Moens and Steedman (1988) , White (1994) , and Pustejovsky (1998) , while the entailment graph will form a similarly hidden version of the \"meaning postulates\" of Carnap (1952) and Fodor, Fodor, and Garrett (1975) . Such semantic representations are essentially distributional, but with the advantage that they can be combined with traditional logical operators such as quantifiers and negation.",
"cite_spans": [
{
"start": 676,
"end": 698,
"text": "Katz and Postal (1964)",
"ref_id": "BIBREF35"
},
{
"start": 701,
"end": 718,
"text": "Jackendoff (1990)",
"ref_id": "BIBREF29"
},
{
"start": 721,
"end": 746,
"text": "Moens and Steedman (1988)",
"ref_id": "BIBREF58"
},
{
"start": 749,
"end": 761,
"text": "White (1994)",
"ref_id": "BIBREF86"
},
{
"start": 768,
"end": 786,
"text": "Pustejovsky (1998)",
"ref_id": "BIBREF69"
},
{
"start": 884,
"end": 897,
"text": "Carnap (1952)",
"ref_id": "BIBREF8"
},
{
"start": 902,
"end": 934,
"text": "Fodor, Fodor, and Garrett (1975)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "This proposal for the discovery of hidden semantic primitives underlying natural language semantics stands in contrast to another quite different contemporary approach to distributional semantics that seeks to use dimensionally reduced vector-based representations of collocations to represent word meanings, using linear-algebraic operations such as vector and tensor addition and multiplication in place of traditional compositional semantics. It is an interesting open question whether vector-based distributional word embeddings can be used, together with directional similarity measures, to build entailment graphs of a similar kind (Henderson and Popa 2016; Chang et al. 2018) . It is quite likely that some kind of hybrid approach will be needed here too, to combat the eternal silence of the infinite spaces of the long tail.",
"cite_spans": [
{
"start": 638,
"end": 663,
"text": "(Henderson and Popa 2016;",
"ref_id": "BIBREF27"
},
{
"start": 664,
"end": 682,
"text": "Chang et al. 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Shape of Things to Come",
"sec_num": "7."
},
{
"text": "Algorithms like LSTM and RNN may work in practice. But do they work in theory? In particular, can they learn all the syntactic stuff in the long tail, like non-constituent coordination, subject extractions, and crossing dependency, in a way that will support semantic interpretation? If they are not actually learning syntax, but are instead learning a huge finite-state transducer or a soft augmented transition network, then by concentrating on them as mechanisms for natural language processing, we are in danger of losing sight of the computational linguistic project of also providing computational explanations of language and mind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "Even if we convince ourselves that something like SEQ2TREE is a psychologically real learning mechanism, and that children learn their first language by end-to-end mapping of the sentences of their language onto the situationally afforded structures of the universal language of mind, we still face the supreme challenge of finding out what that universal semantic target language looks like. We will not get an answer to that question unless we can rise above using SQL, SPARQL, and hand-built ontologies and logics such as OWL as proxies for the hidden language of mind, to use machine learning for what it is really good at, namely, finding out such hidden variables and their values, for use in a truly natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "As well as at Warwick and Edinburgh, much of the early work on CCG was done during 1980-1981 as a visiting fellow at the University of Texas at Austin, under funding from the Sloan Foundation to Stanley Peters and Phil Gough, my first introduction to the United States.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The combinators are identified as B n , T, and S n for historical reasons. This combinatory calculus is distinct from the categorial type-logic stemming from the work ofLambek (1958) that similarly underpins Type-Logical Grammar(Oehrle 1988; Hepple 1990;Morrill 1994;Moortgat 1997), in which the grammar is a logic, rather than a calculus, and some but not all of the CCG combinatory rules are theorems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Dryer (2018) notes four languages that have been claimed to have the first of these orders as basic. However, examination of the source materials makes it clear that the examples in question involve extraposed APs rather than A, which Cinque correctly excludes (cf. Cinque 2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This point seems to have escapedBerwick and Chomsky (2016, pages 175-176, n9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work was supported in part by ERC Advanced Fellowship GA 742137 SEMANTAX; ARC Discovery grant DP160102156; a Google Faculty Award and a Bloomberg L.P. Gift Award; a University of Edinburgh and Huawei Technologies award; my co-investigators Nate Chamber and Mark Johnson; the combinator found, Bonnie Webber; and all my teachers and students over many years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bootstrapping language acquisition",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognition",
"volume": "164",
"issue": "",
"pages": "116--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abend, Omri, Tom Kwiatkowski, Nathaniel Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition, 164:116-143.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the order of words",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Ades",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1982,
"venue": "Linguistics and Philosophy",
"volume": "4",
"issue": "",
"pages": "517--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ades, Anthony and Mark Steedman. 1982. On the order of words. Linguistics and Philosophy, 4:517-558.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bar-Hillel, Yehoshua. 1960/1964. A demonstration of the nonfeasability of fully automatic machine translation",
"authors": [
{
"first": "Bharat",
"middle": [],
"last": "Ambati",
"suffix": ""
},
{
"first": "Tejaswini",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Deoskar",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "174--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ambati, Bharat Ram, Tejaswini Deoskar, Mark Johnson, and Mark Steedman. 2015. An incremental algorithm for transition-based CCG parsing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 53-63, Denver, CO. Bar-Hillel, Yehoshua. 1960/1964. A demonstration of the nonfeasability of fully automatic machine translation. In Yehoshua Bar-Hillel, editor, Language and Information, Addison-Wesley, Reading, MA, pages 174-179.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Efficient global learning of entailment graphs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Noga",
"middle": [],
"last": "Alon",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "",
"pages": "221--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berant, Jonathan, Noga Alon, Ido Dagan, and Jacob Goldberger. 2015. Efficient global learning of entailment graphs. Computational Linguistics, 42:221-263.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Why Only Us: Language and Evolution",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Berwick",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Chomsky",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1027--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berwick, Robert and Noam, Chomsky. 2016. Why Only Us: Language and Evolution. MIT Press, Cambridge, MA. Birch, Alexandra and Miles Osborne. 2011. Reordering metrics for mt. In Proceedings of the Association for Computational Linguistics, pages 1027-1035, Portland, OR.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An augmented state transition network analysis procedure",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Bobrow",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 1969,
"venue": "Proceedings of the 1st International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "557--567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bobrow, Daniel and Bruce Fraser. 1969. An augmented state transition network analysis procedure. In Proceedings of the 1st International Joint Conference on Artificial Intelligence, pages 557-567, Washington, DC.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Structural relationships in children's utterances: Syntactic or semantic",
"authors": [
{
"first": "Melissa",
"middle": [],
"last": "Bowerman",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "197--213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bowerman, Melissa. 1973. Structural relationships in children's utterances: Syntactic or semantic? In T. Moore, editor, Cognitive Development and the Acquisition of Language. Academic Press, pages 197-213.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Introduction: Grammars as mental representations of language",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bresnan",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Kaplan",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "xvii--lii",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bresnan, Joan and Ronald Kaplan. 1982. Introduction: Grammars as mental representations of language. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, MIT Press, Cambridge, MA, pages xvii-lii.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Meaning postulates",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Carnap",
"suffix": ""
}
],
"year": 1952,
"venue": "Philosophical Studies",
"volume": "3",
"issue": "",
"pages": "222--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carnap, Rudolf. 1952. Meaning postulates. Philosophical Studies, 3:65-73. Reprinted as Carnap, 1956:222-229.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Meaning and Necessity",
"authors": [],
"year": 1956,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carnap, Rudolf, editor. 1956. Meaning and Necessity, 2nd edition, University of Chicago Press, Chicago.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Collaborative effort towards common ground in situated human-robot dialogue",
"authors": [
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Lanbo",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Ottarson",
"suffix": ""
},
{
"first": "Cody",
"middle": [],
"last": "Littley",
"suffix": ""
},
{
"first": "Changsong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Hanson",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chai, Joyce, Lanbo She, Rui Fang, Spencer Ottarson, Cody Littley, Changsong Liu, and Kenneth Hanson. 2014. Collaborative effort towards common ground in situated human-robot dialogue. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pages 33-40, Bielefeld.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributional inclusion vector embedding for unsupervised hypernymy detection",
"authors": [
{
"first": "Haw Shiuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "485--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, Haw Shiuan, Ziyun Wang, Luke Vilnis, and Andrew McCallum. 2018. Distributional inclusion vector embedding for unsupervised hypernymy detection. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 485-495, New Orleans, LA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Syntactic Structures",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chomsky, Noam. 1957. Syntactic Structures. Mouton, The Hague.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Aspects of the Theory of Syntax",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1965,
"venue": "Structures and Beyond: The Cartography of Syntactic Structures",
"volume": "",
"issue": "",
"pages": "104--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chomsky, Noam. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA. Chomsky, Noam. 2001/2004. Beyond explanatory adequacy. In Adriana Belletti, editor, Structures and Beyond: The Cartography of Syntactic Structures, Oxford University Press, pages 104-131.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deriving Greenberg's universal 20 and its exceptions",
"authors": [
{
"first": "Guglielmo",
"middle": [],
"last": "Cinque",
"suffix": ""
}
],
"year": 2005,
"venue": "Linguistic Inquiry",
"volume": "36",
"issue": "",
"pages": "315--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cinque, Guglielmo. 2005. Deriving Greenberg's universal 20 and its exceptions. Linguistic Inquiry, 36:315-332.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Syntax of Adjectives, Linguistic Inquiry Monograph 57",
"authors": [
{
"first": "Guglielmo",
"middle": [],
"last": "Cinque",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cinque, Guglielmo. 2010. The Syntax of Adjectives, Linguistic Inquiry Monograph 57. MIT Press, Cambridge MA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Object-extraction and question-parsing using CCG",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "111--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, Stephen and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 104-111, Barcelona. Clark, Stephen, Mark Steedman, and James R. Curran. 2004. Object-extraction and question-parsing using CCG. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 111-118, Barcelona.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On seeing things",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Clowes",
"suffix": ""
}
],
"year": 1971,
"venue": "Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "79--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clowes, Max. 1971. On seeing things. Artificial Intelligence, 2:79-116.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Coarse-to-fine decoding for neural semantic parsing",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "731--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong, Li and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33-43, Berlin. Dong, Li and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of ACL, pages 731-742, Melbourne. Dryer, Matthew. 2018. On the order of demonstrative, numeral, adjective, and noun. Language, 94:(to appear).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The psychological unreality of semantic representations",
"authors": [
{
"first": "Janet",
"middle": [],
"last": "Fodor",
"suffix": ""
},
{
"first": "Jerry",
"middle": [],
"last": "Fodor",
"suffix": ""
},
{
"first": "Merrill",
"middle": [],
"last": "Garrett",
"suffix": ""
}
],
"year": 1975,
"venue": "Linguistic Inquiry",
"volume": "6",
"issue": "",
"pages": "515--531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fodor, Janet, Jerry Fodor, and Merrill Garrett. 1975. The psychological unreality of semantic representations. Linguistic Inquiry, 6:515-531.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Psychology of Language",
"authors": [
{
"first": "Jerry",
"middle": [
"A"
],
"last": "Fodor",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Bever",
"suffix": ""
},
{
"first": "Merrill",
"middle": [],
"last": "Garrett",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fodor, Jerry A., Thomas Bever, and Merrill Garrett. 1974. The Psychology of Language. McGraw-Hill, New York.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Computer Model of Transformational Grammar",
"authors": [
{
"first": "Joyce",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Friedman, Joyce. 1971. A Computer Model of Transformational Grammar. Elsevier, New York.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Unbounded dependencies and coordinate structure",
"authors": [
{
"first": "Gerald",
"middle": [],
"last": "Gazdar",
"suffix": ""
}
],
"year": 1981,
"venue": "Linguistic Inquiry",
"volume": "12",
"issue": "",
"pages": "155--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gazdar, Gerald. 1981. Unbounded dependencies and coordinate structure. Linguistic Inquiry, 12:155-184.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Applicability of indexed grammars to natural languages",
"authors": [
{
"first": "Gerald",
"middle": [],
"last": "Gazdar",
"suffix": ""
}
],
"year": 1988,
"venue": "Natural Language Parsing and Linguistic Theories. Reidel",
"volume": "",
"issue": "",
"pages": "69--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gazdar, Gerald. 1988. Applicability of indexed grammars to natural languages. In Uwe Reyle and Christian Rohrer, editors, Natural Language Parsing and Linguistic Theories. Reidel, Dordrecht, pages 69-94.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Robust Parser-Interpreter for Jazz Chord Sequences",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Granroth-Wilding",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of New Music Research",
"volume": "43",
"issue": "",
"pages": "355--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Granroth-Wilding, Mark and Mark Steedman. 2014. A Robust Parser-Interpreter for Jazz Chord Sequences. Journal of New Music Research, 43:355-374.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On the failure of generative grammar. Language",
"authors": [
{
"first": "Maurice",
"middle": [],
"last": "Gross",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "55",
"issue": "",
"pages": "859--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gross, Maurice. 1978. On the failure of generative grammar. Language, 55:859-885.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Deep semantic role labeling: What works and what's next",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "473--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He, Luheng, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 473-483.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The Grammar and Processing of Order and Dependency: A Categorial Approach",
"authors": [
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Popa",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2052--2062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henderson, James and Diana Popa. 2016. A vector space for distributional semantics for entailment. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2052-2062, Berlin. Hepple, Mark. 1990. The Grammar and Processing of Order and Dependency: A Categorial Approach. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Generative models for statistical parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Third International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "335--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hockenmaier, Julia and Mark Steedman. 2002a. Acquiring compact lexicalized grammars from a cleaner treebank. In Proceedings of the Third International Conference on Language Resources and Evaluation, pages 1974-1981, Las Palmas. Hockenmaier, Julia and Mark Steedman. 2002b. Generative models for statistical parsing with Combinatory Categorial Grammar. In Proceedings of the 40th Meeting of the Association for Computational Linguistics, pages 335-342, Philadelphia, PA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Semantic Structures",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackendoff, Ray. 1990. Semantic Structures. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Computation of the probability of initial substring generation by stochastic context-free grammars",
"authors": [
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "17",
"issue": "",
"pages": "315--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, Fred and John Lafferty. 1991. Computation of the probability of initial substring generation by stochastic context-free grammars. Computational Linguistics, 17:315-323.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Data recombination for neural semantic parsing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia, Robin and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12-22, Berlin.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Tree-adjoining grammars",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1988,
"venue": "Natural Language Parsing",
"volume": "",
"issue": "",
"pages": "206--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind. 1988. Tree-adjoining grammars. In David Dowty, Lauri Karttunen, and Arnold Zwicky, editors, Natural Language Parsing. Cambridge University Press, Cambridge, pages 206-250.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Phrase structure trees bear more fruit than you would have thought",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 1982,
"venue": "Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind and Leon Levy. 1982. Phrase structure trees bear more fruit than you would have thought. Computational Linguistics, 8:1-11.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The convergence of mildly context-sensitive formalisms",
"authors": [
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 1991,
"venue": "Processing of Linguistic Structure",
"volume": "",
"issue": "",
"pages": "31--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshi, Aravind, K. Vijay-Shanker, and David Weir. 1991. The convergence of mildly context-sensitive formalisms. In Peter Sells, Stuart Shieber, and Tom Wasow, editors, Processing of Linguistic Structure. MIT Press, Cambridge, MA, pages 31-81.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "An Integrated Theory of Linguistic Descriptions",
"authors": [
{
"first": "Jerrold",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Postal",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, Jerrold and Paul Postal. 1964. An Integrated Theory of Linguistic Descriptions. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Newly-acquired pre-cultural behavior of the natural troop of",
"authors": [
{
"first": "Masao",
"middle": [],
"last": "Kawai",
"suffix": ""
}
],
"year": 1965,
"venue": "Japanese monkeys on Koshima islet. Primates",
"volume": "6",
"issue": "",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawai, Masao. 1965. Newly-acquired pre-cultural behavior of the natural troop of Japanese monkeys on Koshima islet. Primates, 6:1-30.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Functional grammar",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1979,
"venue": "Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society",
"volume": "",
"issue": "",
"pages": "142--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kay, Martin. 1979. Functional grammar. In Proceedings of the 5th Annual Meeting of the Berkeley Linguistics Society, pages 142-158, Berkeley, CA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The Mentality of Apes. Harcourt Brace and World",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "K\u00f6hler",
"suffix": ""
}
],
"year": 1925,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K\u00f6hler, Wolfgang. 1925. The Mentality of Apes. Harcourt Brace and World, New York.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Joint syntactic and semantic parsing with combinatory categorial grammar",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1188--1198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krishnamurthy, Jayant and Tom Mitchell. 2014. Joint syntactic and semantic parsing with combinatory categorial grammar. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1188-1198, Baltimore, MD.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Lexicalization and generative power in CCG. Computational Linguistics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "41",
"issue": "",
"pages": "187--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuhlmann, Marco, Alexander Koller, and Giorgio Satta. 2015. Lexicalization and generative power in CCG. Computational Linguistics, 41:187-219.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1426--1436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuncoro, Adhiguna, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1426-1436.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Inducing probabilistic CCG grammars from logical form with higher-order unification",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1512--1523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwiatkowski, Tom, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1223-1233, Cambridge, MA. Kwiatkowski, Tom, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in CCG grammar induction for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1512-1523, Edinburgh.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "The mathematics of sentence structure",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Lambek",
"suffix": ""
}
],
"year": 1958,
"venue": "American Mathematical Monthly",
"volume": "65",
"issue": "",
"pages": "154--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lambek, Joachim. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154-170.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The problem of serial order in behavior",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Lashley",
"suffix": ""
}
],
"year": 1951,
"venue": "Cerebral Mechanisms in Behavior",
"volume": "",
"issue": "",
"pages": "112--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lashley, Karl. 1951. The problem of serial order in behavior. In L. A. Jeffress, editor, Cerebral Mechanisms in Behavior, Wiley, New York, pages 112-136. Reprinted in Saporta (1961).",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "The Biological Foundations of Language",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Lenneberg",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenneberg, Eric. 1967. The Biological Foundations of Language. John Wiley, New York.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "General semantics. Synth\u00e8se",
"authors": [
{
"first": "David",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "22",
"issue": "",
"pages": "18--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, David. 1970. General semantics. Synth\u00e8se, 22:18-67.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "LSTM CCG parsing",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "221--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, Mike, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG parsing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 221-231, San Diego, CA.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Combined distributional and logical semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "179--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, Mike and Mark Steedman. 2013a. Combined distributional and logical semantics. Transactions of the Association for Computational Linguistics, 1:179-192.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Unsupervised induction of cross-lingual semantic relations",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "681--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, Mike and Mark Steedman. 2013b. Unsupervised induction of cross-lingual semantic relations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 681-692, Seattle, WA.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "A * CCG parsing with a supertag-factored model",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "990--1000",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, Mike and Mark Steedman. 2014. A * CCG parsing with a supertag-factored model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 990-1000, Doha.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Artificial intelligence: A general survey",
"authors": [
{
"first": "James",
"middle": [],
"last": "Lighthill",
"suffix": ""
}
],
"year": 1973,
"venue": "Artificial Intelligence: A Paper Symposium",
"volume": "",
"issue": "",
"pages": "3--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lighthill, Sir James. 1973. Artificial intelligence: A general survey. In James Lighthill, Stuart Sutherland, Roger Needham, and Christopher Longuet-Higgins, editors, Artificial Intelligence: A Paper Symposium, Science Research Council, London, pages 3-18.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Linguistic structure and speech shadowing at very short latencies",
"authors": [
{
"first": "William",
"middle": [],
"last": "Marslen-Wilson",
"suffix": ""
}
],
"year": 1973,
"venue": "Nature",
"volume": "244",
"issue": "",
"pages": "522--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marslen-Wilson, William. 1973. Linguistic structure and speech shadowing at very short latencies. Nature, 244:522-523.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Combining world and interaction models for human-robot collaborations",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Matuszek",
"suffix": ""
},
{
"first": "Andrzej",
"middle": [],
"last": "Pronobis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Dieter",
"middle": [],
"last": "Fox",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Workshops at the 27th National Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "15--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matuszek, Cynthia, Andrzej Pronobis, Luke Zettlemoyer, and Dieter Fox. 2013. Combining world and interaction models for human-robot collaborations. In Proceedings of Workshops at the 27th National Conference on Artificial Intelligence (AAAI), pages 15-22, Bellevue, WA.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "HMM-based voice separation of MIDI performance",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mcleod",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of New Music Research",
"volume": "45",
"issue": "",
"pages": "17--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McLeod, Andrew and Mark Steedman. 2016. HMM-based voice separation of MIDI performance. Journal of New Music Research, 45:17-26.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "CCG syntactic reordering models for phrase-based machine translation",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Mehay",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "210--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehay, Denis and Chris Brew. 2012. CCG syntactic reordering models for phrase-based machine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 210-221, Montr\u00e9al.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "The Psychology of Communication",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, George. 1967. The Psychology of Communication. Basic Books, New York.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Plans and the Structure of Behavior",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Galanter",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Pribram",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, George, Eugene Galanter, and Karl Pribram. 1960. Plans and the Structure of Behavior. Holt, New York.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Temporal ontology and temporal reference",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1988,
"venue": "The Language of Time: A Reader",
"volume": "14",
"issue": "",
"pages": "93--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moens, Marc and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14:15-28. Reprinted in Inderjeet Mani, James Pustejovsky, and Robert Gaizauskas (eds.), The Language of Time: A Reader. Oxford University Press, pages 93-114.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Universal grammar",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Montague",
"suffix": ""
}
],
"year": 1970,
"venue": "Theoria",
"volume": "36",
"issue": "",
"pages": "373--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Montague, Richard. 1970. Universal grammar. Theoria, 36:373-398. Reprinted as Thomason 1974:222-246.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Categorial type logics",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Moortgat",
"suffix": ""
}
],
"year": 1997,
"venue": "Handbook of Logic and Language",
"volume": "",
"issue": "",
"pages": "93--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moortgat, Michael. 1997. Categorial type logics. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language, North Holland, Amsterdam, pages 93-177.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Type-Logical Grammar",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrill, Glyn. 1994. Type-Logical Grammar. Kluwer, Dordrecht.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "The Grammar of Shupamem",
"authors": [
{
"first": "Abdoulaye",
"middle": [
"Laziz"
],
"last": "Nchare",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nchare, Abdoulaye Laziz. 2012. The Grammar of Shupamem. Ph.D. thesis, New York University.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "A psycholinguistically motivated parser for CCG",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Niv",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niv, Michael. 1994. A psycholinguistically motivated parser for CCG. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 125-132, Las Cruces, NM.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Multidimensional compositional functions as a basis for grammatical analysis",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Oehrle",
"suffix": ""
}
],
"year": 1988,
"venue": "Categorial Grammars and Natural Language Structures. Reidel",
"volume": "",
"issue": "",
"pages": "349--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oehrle, Richard. 1988. Multidimensional compositional functions as a basis for grammatical analysis. In Richard Oehrle, Emmon Bach, and Deirdre Wheeler, editors, Categorial Grammars and Natural Language Structures. Reidel, Dordrecht, pages 349-390.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "A lazy way to chart parse with categorial grammars",
"authors": [
{
"first": "Remo",
"middle": [],
"last": "Pareschi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pareschi, Remo and Mark Steedman. 1987. A lazy way to chart parse with categorial grammars. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 81-88, Stanford, CA.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Language and Machines: Computers in Translation and Linguistics. National Academy of Sciences",
"authors": [
{
"first": "John",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Hamp",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Hockett",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Oettinger",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Perlis",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierce, John, John Carroll, Eric Hamp, David Hays, Charles Hockett, Anthony Oettinger, and Alan Perlis. 1966. Language and Machines: Computers in Translation and Linguistics. National Academy of Sciences, National Research Council, Washington, DC. (ALPAC Report).",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Head Driven Phrase Structure Grammar",
"authors": [
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pollard, Carl and Ivan Sag. 1994. Head Driven Phrase Structure Grammar. CSLI Publications, Stanford, CA.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pustejovsky, James. 1998. The Generative Lexicon. MIT Press.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Large-scale semantic parsing without question-answer pairs",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "377--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reddy, Siva, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics, 2:377-392.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Constraints on Variables in Syntax",
"authors": [
{
"first": "John",
"middle": [
"Robert"
],
"last": "Ross",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross, John Robert. 1967. Constraints on Variables in Syntax. Ph.D. thesis, MIT. Published as Ross 1986.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Psycholinguistics: A Book of Readings",
"authors": [],
"year": 1961,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saporta, Sol, editor. 1961. Psycholinguistics: A Book of Readings. Holt, Rinehart and Winston, New York.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Synonymy and Semantic Classification",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Sp\u00e4rck Jones",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sp\u00e4rck Jones, Karen. 1964/1986. Synonymy and Semantic Classification. Edinburgh University Press. Ph.D. Thesis, Cambridge, 1964.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Combinatory grammars and parasitic gaps",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1987,
"venue": "Natural Language and Linguistic Theory",
"volume": "5",
"issue": "",
"pages": "403--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 1987. Combinatory grammars and parasitic gaps. Natural Language and Linguistic Theory, 5:403-439.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 2000. The Syntactic Process. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Plans, affordances, and combinatory grammar",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2002,
"venue": "Linguistics and Philosophy",
"volume": "25",
"issue": "",
"pages": "723--753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 2002. Plans, affordances, and combinatory grammar. Linguistics and Philosophy, 25:723-753.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "A formal universal of natural language grammar",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 2018. A formal universal of natural language grammar. Submitted.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Combinatory categorial grammar",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2011,
"venue": "Non-Transformational Syntax: A Guide to Current Models",
"volume": "",
"issue": "",
"pages": "181--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark and Jason Baldridge. 2011. Combinatory categorial grammar. In Robert Borsley and Kirsti B\u00f6rjars, editors, Non-Transformational Syntax: A Guide to Current Models. Blackwell, Oxford, pages 181-224.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Bound variables in syntax: Are there any?",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Szabolcsi",
"suffix": ""
}
],
"year": 1989,
"venue": "Semantics and Contextual Expression. Foris",
"volume": "",
"issue": "",
"pages": "295--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Szabolcsi, Anna. 1989. Bound variables in syntax: Are there any? In Renate Bartsch, Johan van Benthem, and Peter van Emde Boas, editors, Semantics and Contextual Expression. Foris, Dordrecht, pages 295-318.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Formal Philosophy: Papers of Richard Montague",
"authors": [
{
"first": "Richmond",
"middle": [],
"last": "Thomason",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomason, Richmond, editor. 1974. Formal Philosophy: Papers of Richard Montague. Yale University Press, New Haven, CT.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "The syntactic analysis of English by machine",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Bratley",
"suffix": ""
},
{
"first": "Hamish",
"middle": [],
"last": "Dewar",
"suffix": ""
}
],
"year": 1968,
"venue": "Machine Intelligence",
"volume": "3",
"issue": "",
"pages": "281--309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorne, James, Paul Bratley, and Hamish Dewar. 1968. The syntactic analysis of English by machine. In Donald Michie, editor, Machine Intelligence, volume 3. Edinburgh University Press, Edinburgh, pages 281-309.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Characterizing structural descriptions produced by various grammatical formalisms",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay-Shanker, K., David Weir, and Aravind Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 104-111, Stanford, CA.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "Grammar as a foreign language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2755--2763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinyals, Oriol, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2755-2763, Montreal.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "Formal Principles of Language Acquisition",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Wexler",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Culicover",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wexler, Kenneth and Peter Culicover. 1980. Formal Principles of Language Acquisition. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "A Computational Approach to Aspectual Composition",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "White, Michael. 1994. A Computational Approach to Aspectual Composition. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "Preference semantics",
"authors": [
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1975,
"venue": "Formal Semantics of Natural Language",
"volume": "",
"issue": "",
"pages": "329--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilks, Yorick. 1975. Preference semantics. In Edward Keenan, editor, Formal Semantics of Natural Language. Cambridge University Press, Cambridge, pages 329-348.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "Understanding Natural Language",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Winograd, Terry. 1972. Understanding Natural Language. Academic Press, New York.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "Transition network grammars for natural language analysis",
"authors": [
{
"first": "William",
"middle": [],
"last": "Woods",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the Association for Computing Machinery",
"volume": "18",
"issue": "",
"pages": "264--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woods, William. 1970. Transition network grammars for natural language analysis. Communications of the Association for Computing Machinery, 18:264-274.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "The right tools: Reflections on computation and language",
"authors": [
{
"first": "William",
"middle": [],
"last": "Woods",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "",
"pages": "601--630",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woods, William. 2010. The right tools: Reflections on computation and language. Computational Linguistics, 36:601-630.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "The lunar sciences natural language information system: Final report",
"authors": [
{
"first": "William",
"middle": [],
"last": "Woods",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Nash-Webber",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woods, William, Ron Kaplan, and Bonnie Nash-Webber. 1972. The lunar sciences natural language information system: Final report. Technical Report 2378, Bolt, Beranek, and Newman Inc, Cambridge, MA.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "Shift-reduce CCG parsing with a dependency model",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "218--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, Wenduan, Stephen Clark, and Yue Zhang. 2014. Shift-reduce CCG parsing with a dependency model. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 218-227, Baltimore, MD.",
"links": null
},
"BIBREF93": {
"ref_id": "b93",
"title": "Learning to map sentences to logical form: Structured classification with Probabilistic Categorial Grammars",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 21st Conference on Uncertainty in AI (UAI)",
"volume": "",
"issue": "",
"pages": "658--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zettlemoyer, Luke and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with Probabilistic Categorial Grammars. In Proceedings of the 21st Conference on Uncertainty in AI (UAI), pages 658-666, Edinburgh.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Figure 1 (a) K\u00f6hler 1925; (b) Kawai 1965.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Give [Anna books] ? and [Manny records] ? b. (Mother to child): There's a DOGGIE! [You LIKE] ? # the doggie. c. Food that you must [wash VP/NP [before eating] (VP\\VP)/NP ] ? . d. ik denk dat ik 1 Henk 2 Cecilia 3 [zag 1 leren 2 zingen 3 ] ?"
}
}
}
}