{ "paper_id": "P89-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:14:37.747611Z" }, "title": "The Structure of Shared Forests in Ambiguous Parsing", "authors": [ { "first": "Sylvie", "middle": [], "last": "Billot", "suffix": "", "affiliation": { "laboratory": "", "institution": "INRIA and Université d'Orléans", "location": {} }, "email": "" }, { "first": "Bernard", "middle": [], "last": "Lang", "suffix": "", "affiliation": { "laboratory": "", "institution": "INRIA and Université d'Orléans", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The Context-Free backbone of some natural language analyzers produces all possible CF parses as some kind of shared forest, from which a single tree is to be chosen by a disambiguation process that may be based on the finer features of the language. We study the structure of these forests with respect to optimality of sharing, and in relation with the parsing schema used to produce them. In addition to a theoretical and experimental framework for studying these issues, the main results presented are: - sophistication in chart parsing schemata (e.g. use of look-ahead) may reduce time and space efficiency instead of improving it, - there is a shared forest structure with at most cubic size for any CF grammar, - when O(n^3) complexity is required, the shape of a shared forest is dependent on the parsing schema used. Though analyzed on CF grammars for simplicity, these results extend to more complex formalisms such as unification based grammars.", "pdf_parse": { "paper_id": "P89-1018", "_pdf_hash": "", "abstract": [ { "text": "The Context-Free backbone of some natural language analyzers produces all possible CF parses as some kind of shared forest, from which a single tree is to be chosen by a disambiguation process that may be based on the finer features of the language. 
We study the structure of these forests with respect to optimality of sharing, and in relation with the parsing schema used to produce them. In addition to a theoretical and experimental framework for studying these issues, the main results presented are: - sophistication in chart parsing schemata (e.g. use of look-ahead) may reduce time and space efficiency instead of improving it, - there is a shared forest structure with at most cubic size for any CF grammar, - when O(n^3) complexity is required, the shape of a shared forest is dependent on the parsing schema used. Though analyzed on CF grammars for simplicity, these results extend to more complex formalisms such as unification based grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Several natural language parsers start with a pure Context-Free (CF) backbone that makes a first sketch of the structure of the analyzed sentence, before it is handed to a more elaborate analyzer (possibly a coroutine), that takes into account the finer grammatical structure to filter out undesirable parses (see for example [24, 28] ). In [28] , Shieber surveys existing variants to this approach before giving his own tunable approach based on restrictions that \"split up the infinite nonterminal domain into a finite set of equivalence classes that can be used for parsing\". The basic motivation for this approach is to benefit from the CF parsing technology whose development over 30 years has led to powerful and efficient parsers [1,7] . A parser that takes into account only an approximation of the grammatical features will often find ambiguities it cannot resolve in the analyzed sentences 1. A natural solution *Address: INRIA, B.P. 105, 78153 Le Chesnay, France. 
The work reported here was partially supported by the Eureka Software Factory project.", "cite_spans": [ { "start": 65, "end": 69, "text": "(CF)", "ref_id": null }, { "start": 327, "end": 331, "text": "[24,", "ref_id": "BIBREF25" }, { "start": 332, "end": 335, "text": "28]", "ref_id": "BIBREF29" }, { "start": 342, "end": 346, "text": "[28]", "ref_id": "BIBREF29" }, { "start": 739, "end": 744, "text": "[1,7]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 Ambiguity may also have a semantical origin. is then to produce all possible parses, according to the CF backbone, and then select among them on the basis of the complete features information. One hitch is that the number of parses may be exponential in the size of the input sentence, or even infinite for cyclic grammars or incomplete sentences [16] . However chart parsing techniques have been developed that produce an encoding of all possible parses as a data structure with a size polynomial in the length of the input sentence. These techniques are all based on a dynamic programming paradigm.", "cite_spans": [ { "start": 349, "end": 353, "text": "[16]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The kind of structure they produce to represent all parses of the analyzed sentence is an essential characteristic of these algorithms. Some of the published algorithms produce only a chart as described by Kay in [14] , which only associates nonterminal categories to segments of the analyzed sentence [11, 39, 13, 3, 9] , and which thus still requires non-trivial processing to extract parse-trees [26] . 
The worst size complexity of such a chart is only a square function of the size of the input 2.", "cite_spans": [ { "start": 212, "end": 216, "text": "[14]", "ref_id": "BIBREF14" }, { "start": 301, "end": 305, "text": "[11,", "ref_id": "BIBREF11" }, { "start": 306, "end": 309, "text": "39,", "ref_id": "BIBREF40" }, { "start": 310, "end": 313, "text": "13,", "ref_id": "BIBREF13" }, { "start": 314, "end": 316, "text": "3,", "ref_id": null }, { "start": 317, "end": 319, "text": "9]", "ref_id": null }, { "start": 398, "end": 402, "text": "[26]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, practical parsing algorithms will often produce a more complex structure that explicitly relates the instances of nonterminals associated with sentence fragments to their constituents, possibly in several ways in case of ambiguity, with a sharing of some common subtrees between the distinct ambiguous parses [7, 4, 24, 31, 25] 3. One advantage of this structure is that the chart retains only those constituents that can actually participate in a parse. Furthermore it makes the extraction of parse-trees a trivial matter. A drawback is that this structure may be cubic in the length of the parsed sentence, and more generally polynomial for some proposed algorithms [31] . 
However, these algorithms are rather well behaved in practice, and this complexity is not a problem.", "cite_spans": [ { "start": 318, "end": 321, "text": "[7,", "ref_id": "BIBREF6" }, { "start": 322, "end": 324, "text": "4,", "ref_id": "BIBREF3" }, { "start": 325, "end": 328, "text": "24,", "ref_id": "BIBREF25" }, { "start": 329, "end": 332, "text": "31,", "ref_id": "BIBREF32" }, { "start": 333, "end": 336, "text": "25]", "ref_id": "BIBREF26" }, { "start": 677, "end": 681, "text": "[31]", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we shall call shared forests such data struc-2 We do not consider CF recognizers that have asymptotically the lowest complexity, but are only of theoretical interest here [~S,5] .", "cite_spans": [ { "start": 183, "end": 189, "text": "[~S,5]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3 There are several other published implementations of chart parsers [23, 20, 33] , but they often do not give much detail on the output of the parsing process, or even side-step the problem al-", "cite_spans": [ { "start": 68, "end": 72, "text": "[23,", "ref_id": "BIBREF24" }, { "start": 73, "end": 76, "text": "20,", "ref_id": "BIBREF21" }, { "start": 77, "end": 80, "text": "33]", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "together [33] . We do not consider here the well-formed substring tables of Sheil [26] which falls somewhere in between in our classification. 
They do not use pointers and parse-trees are only \"indirectly\" visible, but may be extracted rather simply in linear time.", "cite_spans": [ { "start": 9, "end": 13, "text": "[33]", "ref_id": "BIBREF34" }, { "start": 83, "end": 87, "text": "[26]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The table may contain useless constituents. tures used to represent simultaneously all parse trees for a given sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several questions may be asked in relation with shared forests:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 How to construct them during the parsing process?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Can the cubic complexity be attained without modifying the grammar (e.g. 
into Chomsky Normal Form)?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 What is the appropriate data structure to improve sharing and reduce time and space complexity?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 How good is the sharing of tree fragments between ambiguous parses, and how can it be improved?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Is there a relation between the coding of parse-trees in the shared forest and the parsing schema used?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 How well formalized is their definition and construction?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These questions are of importance in practical systems because the answers impact both the performance and the implementation techniques. For example good sharing may allow a better factorization of the computation that filters parse trees with the secondary features of the language. The representation needed for good sharing or low space complexity may be incompatible with the needs of other components of the system. These components may also make assumptions about this representation that are incompatible with some parsing schemata. The issue of formalization is of course related to the formal tractability of correctness proofs for algorithms using shared forests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In section 2 we describe a uniform theoretical framework in which various parsing strategies are expressed and compared with respect to the above questions. 
This approach has been implemented in a system intended for the experimental study and comparison of parsing strategies. This system is described in section 3. Section 4 contains a detailed example produced with our implementation which illustrates both the working of the system and the underlying theory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To discuss the above issues in a uniform way, we need a general framework that encompasses all forms of chart parsing and shared forest building in a unique formalism. We shall take as a basis a formalism developed by the second author in previous papers [15, 16] . The idea of this approach is to separate the dynamic programming constructs needed for efficient chart parsing from the chosen parsing schema. Comparison between the classifications of Kay [14] and Griffith & Petrick [10] shows that a parsing schema (or parsing strategy) may be expressed in the construction of a Push-Down Transducer (PDT), a well studied formalization of left-to-right CF parsers 5. These PDTs are usually non-deterministic and cannot be used as produced for actual parsing. Their backtrack simulation does not always terminate, and is often time-exponential when it does, while breadth-first simulation is usually exponential for both time and space. However ity. 
This approach may thus be used as a uniform framework for comparing chart parsers s.", "cite_spans": [ { "start": 254, "end": 258, "text": "[15,", "ref_id": "BIBREF15" }, { "start": 259, "end": 262, "text": "16]", "ref_id": "BIBREF16" }, { "start": 454, "end": 458, "text": "[14]", "ref_id": "BIBREF14" }, { "start": 482, "end": 486, "text": "[10]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "A Uniform Framework", "sec_num": "2" }, { "text": "The algorithm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "The following is a formal overview of parsing by dynamic programming interpretation of PDTs. Our aim is to parse sentences in the language L(G) generated by a CF phrase structure grammar G = (V, Σ, Π, S) according to its syntax. The notation used is V for the set of nonterminals, Σ for the set of terminals, Π for the rules, S for the initial nonterminal, and ε for the empty string. We assume that, by some appropriate parser construction technique (e.g. [12, 6, 1] ) we mechanically produce from the grammar G a parser for the language L(G) in the form of a (possibly non-deterministic) push-down transducer (PDT) T G. The output of each possible computation of the parser is a sequence of rules in Π 7 to be used in a left-to-right reduction of the input sentence (this is obviously equivalent to producing a parse-tree).", "cite_spans": [ { "start": 454, "end": 458, "text": "[12,", "ref_id": "BIBREF12" }, { "start": 459, "end": 461, "text": "6,", "ref_id": "BIBREF5" }, { "start": 462, "end": 464, "text": "1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "We assume for the PDT T G a very general formal definition that can fit most usual PDT construction techniques. 
It is defined as an 8-tuple T G = (Q, Σ, Δ, Π, δ, q, $, F) where: Q is the set of states, Σ is the set of input word symbols, Δ is the set of stack symbols, Π is the set of output symbols 8 (i.e. rules of G), q is the initial state, $ is the initial stack symbol, F is the set of final states, δ is a finite set of transitions of the form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "(p A a ↦ q B u) with p, q ∈ Q,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "A, B ∈ Δ ∪ {ε}, a ∈ Σ ∪ {ε}, and u ∈ Π*.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "Let the PDT be in a configuration ρ = (p Aα ax u) where p is the current state, Aα is the stack contents with A on the top, ax is the remaining input where the symbol a is the next to be shifted and x ∈ Σ*, and u is the already produced output. The application of a transition τ = (p A a ↦ q B v) results in a new configuration ρ' = (q Bα x uv) where the terminal symbol a has been scanned (i.e. shifted), A has been popped and B has been pushed, and v has been concatenated to the existing output u. If the terminal symbol a is replaced by ε in the transition, no input symbol is scanned. If A (resp. B) is replaced by ε then no stack symbol is popped from (resp. pushed on) the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "Our algorithm consists in an Earley-like 9 simulation of the PDT T G. Using the terminology of [1] , the algorithm builds an item set Si successively for each word symbol xi holding position i in the input sentence x. An item is constituted of two modes of the form (p A i) where p is a PDT state, A is a stack symbol, and i is the index of an input symbol. The item set Si contains items of the form ((p A i) (q B j)) . 
These items are used as nonterminals of an output grammar 5 The original intent of [15] was to show how one can generate efficient general CF chart parsers, by first producing the PDT with the efficient techniques for deterministic parsing developed for the compiler technology [6, 12, 1] . This idea was later successfully used by Tomita [31] who applied it to LR(1) parsers [6, 1] , and later to other push-down based parsers [32] .", "cite_spans": [ { "start": 95, "end": 98, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 503, "end": 507, "text": "[15]", "ref_id": "BIBREF15" }, { "start": 698, "end": 701, "text": "[6,", "ref_id": "BIBREF5" }, { "start": 702, "end": 705, "text": "12,", "ref_id": "BIBREF12" }, { "start": 706, "end": 708, "text": "1]", "ref_id": "BIBREF0" }, { "start": 759, "end": 763, "text": "[31]", "ref_id": "BIBREF32" }, { "start": 796, "end": 799, "text": "[6,", "ref_id": "BIBREF5" }, { "start": 800, "end": 802, "text": "1]", "ref_id": "BIBREF0" }, { "start": 848, "end": 852, "text": "[32]", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "7 Implementations usually denote these rules by their index in the set Π.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "8 Actual implementations use output symbols from Π ∪ Σ, since rules alone do not distinguish words in the same lexical category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "9 We assume the reader to be familiar with some variation of Earley's algorithm. Earley's original paper uses the word state (from dynamic programming terminology) instead of item. = (S, Π, P, Uf), where S is the set of all items (i.e. the union of the Si), and the rules in P are constructed together with their left-hand-side item by the algorithm. 
The initial nonterminal Uf of the output grammar derives the last items produced by a successful computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "Appendix A gives the details of the construction of items and rules in G by interpretation of the transitions of the PDT. More details may be found in [15, 16] .", "cite_spans": [ { "start": 151, "end": 155, "text": "[15,", "ref_id": "BIBREF15" }, { "start": 156, "end": 159, "text": "16]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "2.1", "sec_num": null }, { "text": "The shared forest An apparently major difference between the above algorithm and other parsers is that it represents a parse as the string of the grammar rules used in a leftmost reduction of the parsed sentence, rather than as a parse tree (cf. section 4). When the sentence has several distinct parses, the set of all possible parse strings is represented in finite shared form by a CF grammar that generates that possibly infinite set. Other published algorithms produce instead a graph structure representing all parse-trees with sharing of common subparts, which corresponds well to the intuitive notion of a shared forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "This difference is only in appearance. We show here in section 4 that the CF grammar of all leftmost parses is just a theoretical formalization of the shared-forest graph. Context-Free grammars can be represented by AND-OR graphs that are closely related to the syntax diagrams often used to describe the syntax of programming languages [37] , and to the transition networks of Woods [22] . In the case of our grammar of leftmost parses, this AND-OR graph (which is acyclic when there is only finite ambiguity) is precisely the shared-forest graph. In this graph, AND-nodes correspond to the usual parse-tree nodes, while OR-nodes correspond to ambiguities, i.e. 
distinct possible subtrees occurring in the same context. Sharing of subtrees is represented by nodes accessed by more than one other node.", "cite_spans": [ { "start": 334, "end": 338, "text": "[37]", "ref_id": "BIBREF38" }, { "start": 381, "end": 385, "text": "[22]", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "The grammar viewpoint is the following (cf. the example in section 4). Non-terminal (resp. terminal) symbols correspond to nodes with (resp. without) outgoing arcs. AND-nodes correspond to right-hand sides of grammar rules, and OR-nodes (i.e. ambiguities) correspond to non-terminals defined by several rules. Subtree sharing is represented by several uses of the same symbol in rule right-hand sides.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "To our knowledge, this representation of parse-forests as grammars is the simplest and most tractable theoretical formalization proposed so far, and the parser presented here is the only one for which the correctness of the output grammar --i.e. of the shared-forest --has ever been proved. Though in the examples we use graph(ical) representations for intuitive understanding (grammars are also sometimes represented as graphs [37] ), they are not the proper formal tool for manipulating shared forests, and developing formalized (proved) algorithms that use them. Graph formalization is considerably more complex and awkward to manipulate than the well understood, specialized and few concepts of CF grammars. 
Furthermore, unlike graphs, this grammar formalization of the shared forest may be tractably extended to other grammatical formalisms (cf. section 5).", "cite_spans": [ { "start": 428, "end": 432, "text": "[37]", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "More importantly, our work on the parsing of incomplete sentences [16] has exhibited the fundamental character of our grammatical view of shared forests: when parsing the completely unknown sentence, the shared forest obtained is precisely the complete grammar of the analyzed language. This also leads to connections with the work on partial evaluation [8] .", "cite_spans": [ { "start": 66, "end": 70, "text": "[16]", "ref_id": "BIBREF16" }, { "start": 354, "end": 357, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "2.2", "sec_num": null }, { "text": "The shape of the forest For our shared-forest, a cubic space complexity (in the worst case --space complexity is often linear in practice) is achieved, without requiring that the language grammar be in Chomsky Normal Form, by producing a grammar of parses that has at most two symbols on the right-hand side of its rules. This amounts to representing the list of sons of a parse tree node as a Lisp-like list built with binary nodes (see figures 1-2), and it allows partial sharing of the sons 10", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "The structure of the parse grammar, i.e. the shape of the parse forest, is tightly related to the parsing schema used, hence to the structure of the possible computations of the non-deterministic PDT from which the parser is constructed. 
First we need a precise characterization of parsing strategies, whose distinction is often blurred by superimposed optimizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "We call bottom-up a strategy in which the PDT decides on the nature of a constituent (i.e. on the grammar rule that structures it), after having made this decision first on its subconstituents. It corresponds to a postfix left-to-right walk of the parse tree. Top-Down parsing recognizes a constituent before recognition of its subconstituents, and corresponds to a prefix walk. Intermediate strategies are also possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "The sequence of operations of a bottom-up parser is basically of the following form (up to possible simplifying optimizations): To parse a constituent A, the parser first parses and pushes on the stack each sub-constituent Bi; at some point, it decides that it has all the constituents of A on the stack and it pops them all, and then it pushes A and outputs the (rule number f of the) recognized rule f : A → B1 ... Bn. Dynamic programming interpretation of such a sequence results in a shared forest containing parse-trees with the shape described in figure 1, i.e. where each node of the forest points to the beginning of the list of its sons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "A top-down PDT uses a different sequence of operations, detailed in appendix B, resulting in the shape of figure 2 where a forest node points to the end of the list of sons, which is itself chained backward. These two figures are only simple examples. 
Many variations on the shape of parse trees and forests may be obtained by changing the parsing schema.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "Sharing in the shared forest may correspond to sharing of a complete subtree, but also to sharing of a tail of a list of sons: this is what allows the cubic complexity. Thus bottom-up parsing may share only the rightmost subconstituents of a constituent, while top-down parsing may share only the leftmost subconstituents. This relation between parsing schema and shape of the shared forest (and type of sharing) is a consequence of intrinsic properties of chart parsing, and not of our specific implementation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "It is for example to be expected that the bidirectional nature of island parsing leads to irregular structure in shared forests, when optimal sharing is sought for.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.3", "sec_num": null }, { "text": "The ideas presented above have been implemented in an experimental system called Tin (after the woodman of Oz). The intent is to provide a uniform framework for the construction and experimentation of chart parsers, somewhat as systems like MCHART [29] , but with a more systematic theoretical foundation. The kernel of the system is a virtual parsing machine with a stack and a set of primitive commands corresponding essentially to the operation of a practical Push-Down Transducer. These commands include for example: push (resp. pop) to push a symbol on the stack (resp. pop one), checkwindow to compare the look-ahead symbol(s) to some given symbol, checkstack to branch depending on the top of the stack, scan to read an input word, output to output a rule number (or a terminal symbol), goto for unconditional jumps, and a few others. However these commands are never used directly to program parsers. 
They are used as machine instructions for compilers that compile grammatical definitions into Tin code according to some parsing schema. A characteristic of these commands is that they may all be marked as non-deterministic. The intuitive interpretation is that there is a non-deterministic choice between a command thus marked and another command whose address in the virtual machine code is then specified. However execution of the virtual machine code is done by an all-paths interpreter that follows the dynamic programming strategy described in section 2.1 and appendix A.", "cite_spans": [ { "start": 248, "end": 252, "text": "[29]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "The Tin interpreter is used in two different ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "1. to study the effectiveness for chart parsing of known parsing schemata designed for deterministic parsing. We have only considered formally defined parsing schemata, corresponding to established PDA construction techniques that we use to mechanically translate CF grammars into Tin code. (e.g. LALR(1) and LALR(2) [6] , weak precedence [12] , LL(0) top-down (recursive descent), LR(0), LR(1) [1] ...). 2. to study the computational behavior of the generated code, and the optimization techniques that could be used on the Tin code --and more generally chart parser code --with respect to code size, execution speed and better sharing in the parse forest. Experimenting with several compilation schemata has shown that sophistication may have a negative effect on the efficiency of all-paths parsing 11. Sophisticated PDT construction techniques tend to multiply the number of special cases, thereby increasing the code size of the chart parser. 
Sometimes it also prevents sharing of locally identical subcomputations because of differences in context analysis. This in turn may result in lesser sharing in the parse forest and sometimes longer computation, as in example SBBL in appendix C, but of course it does not change the set of parse-trees encoded in the forest 12. Experimentally, weak precedence gives slightly better sharing than LALR(1) parsing. The latter is often viewed as more efficient, whereas it only has a larger deterministic domain.", "cite_spans": [ { "start": 317, "end": 320, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 339, "end": 343, "text": "[12]", "ref_id": "BIBREF12" }, { "start": 395, "end": 398, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "One essential guideline to achieve better sharing (and often also reduced computation time) is to try to recognize every grammar rule in only one place of the generated chart parser code, even at the cost of increasing non-determinism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "Thus simpler schemata such as precedence, LL(0) (and probably LR(0) 13) produce the best sharing. However, since they correspond to a smaller deterministic domain within the CF grammar realm, they may sometimes be computationally less efficient because they produce a larger number of useless items (i.e. edges) that correspond to dead-end computational paths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "Slight sophistication (e.g. LALR(1) used by Tomita in [31] , or LR(1) ) may slightly improve computational performance by detecting earlier dead-end computations. 
This may however be at the expense of the forest sharing quality.", "cite_spans": [ { "start": 54, "end": 58, "text": "[31]", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "More sophistication (say LR(2)) is usually losing on both accounts as explained earlier. The duplication of computational paths due to distinct context analysis overweights the 11 We mean here the sophistication of the CF parser construction technique rather than the sophistication of the language features chosen to be used by this parser.", "cite_spans": [ { "start": 177, "end": 179, "text": "11", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "12 This negative behavior of some techniques originally intended to preserve determinism had been remarked and analyzed in a special case by Bouckaert, Pirotte and Snelling [3] . However we believe their result to be weaker than ours, since it seems to rely on the fact that they directly interpret grammars rather than first compile them. Hence each interpretive step includes in some sense compilation steps, which are more expensive when look-ahead is increased. Their paper presents several examples that run less efficiently when look-ahead is increased. For all these examples, this behavior disappears in our compiled setting. However the grammar SBBL in appendix C shows a loss of efficiency with increased look-ahead that is due exclusively to loss of sharing caused by irrelevant contextual distinctions. 
This effect is particularly visible when parsing incomplete sentences [16] .", "cite_spans": [ { "start": 173, "end": 176, "text": "[3]", "ref_id": null }, { "start": 885, "end": 889, "text": "[16]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "Efficiency loss with increased look-ahead is mainly due to state splitting [6] . This should favor LALR techniques over LR ones.", "cite_spans": [ { "start": 75, "end": 78, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "13 Our results do not take into account a newly found optimization of PDT interpretation that applies to all and only to bottom-up PDTs. This should make simple bottom-up schemes competitive for sharing quality, and even increase their computational efficiency. However it should not change qualitatively the relative performances of bottom-up parsers, and may emphasize even more the phenomenon that reduces efficiency when look-ahead increases. benefits of early elimination of dead-end paths. But there can be no absolute rule: if a grammar is \"close\" to the LR(2) domain, an LR(2) schema is likely to give the best result for most parsed sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "Sophisticated schemata correspond also to larger parsers, which may be critical in some natural language applications with very large grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "The choice of a parsing schema depends in the end on the grammar used, on the corpus (or kind) of sentences to be analyzed, and on a balance between computational and sharing efficiency. It is best decided on an experimental basis with a system such as ours. 
Furthermore, we do not believe that any firm conclusion limited to CF grammars would be of real practical usefulness. The real purpose of the work presented is to get a qualitative insight into phenomena which are best exhibited in the simpler framework of CF parsing. This insight should help us with more complex formalisms (cf. section 5) for which the phenomena might be less easily evidenced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "Note that the evidence gained contradicts the common belief that parsing schemata with a large deterministic domain (see for example the remarks on LR parsing in [31] ) are more effective than simpler ones. Most experiments in this area were based on incomparable implementations, while our uniform framework gives us a common theoretical yardstick.", "cite_spans": [ { "start": 161, "end": 165, "text": "[31]", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation and Experimental Results", "sec_num": "3" }, { "text": "The following is a simple example based on a bottom-up PDT generated by our LALR(1) compiler from the following grammar taken from [31] : The sample input is \"n v det n prep n\". It figures (for example) the sentence: \"I see a man at home\".", "cite_spans": [ { "start": 131, "end": 135, "text": "[31]", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "A Simple Bottom-Up Example", "sec_num": "4" }, { "text": "The grammar of parses of the input sentence is given in figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 64, "text": "figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Output grammar produced by the parser", "sec_num": "4.1" }, { "text": "The initial nonterminal is the left-hand side of the first rule. For readability, the nonterminals have been given computer generated names of the form atx, where x is an integer. 
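The encoding used in this section — a shared forest is itself a grammar, each of whose nonterminals names a (symbol, span) pair and each of whose alternative rules is one way of deriving that span — can be sketched in Python. The toy grammar below is our own stand-in with the same PP-attachment ambiguity as the sample sentence; the actual grammar of [31] and the rule numbering of figure 3 are not reproduced here.

```python
from collections import defaultdict

# Hypothetical CF backbone: the rule names and symbols are ours.
BINARY = {
    ("NP", "VP"): "S",
    ("DET", "N"): "NP",
    ("NP", "PP"): "NP",
    ("V", "NP"): "VP",
    ("VP", "PP"): "VP",
    ("PREP", "NP"): "PP",
}
UNARY = {"N": "NP"}                                   # NP -> N
LEXICAL = {"n": "N", "v": "V", "det": "DET", "prep": "PREP"}

def parse_forest(tokens):
    """CKY-style chart: forest[(A, i, j)] lists the packed derivations of A
    over the span [i, j); several entries in one list mean ambiguity."""
    forest = defaultdict(list)
    n = len(tokens)
    for i, w in enumerate(tokens):
        a = LEXICAL[w]
        forest[(a, i, i + 1)].append((w,))
        if a in UNARY:
            forest[(UNARY[a], i, i + 1)].append(((a, i, i + 1),))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (b, c), a in BINARY.items():
                    if (b, i, k) in forest and (c, k, j) in forest:
                        # Packing: a second derivation of the same (a, i, j)
                        # is appended, never duplicated as a new node.
                        forest[(a, i, j)].append(((b, i, k), (c, k, j)))
    return forest

def count_parses(forest, item):
    """Number of distinct trees packed below an item, without unfolding them."""
    total = 0
    for deriv in forest[item]:
        prod = 1
        for child in deriv:
            if isinstance(child, tuple):        # lexical children are strings
                prod *= count_parses(forest, child)
        total += prod
    return total
```

Both readings of "I see a man at home" share every subconstituent; only the ambiguous VP node carries two alternatives, which is exactly the ambiguity noted below for nonterminal at4.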
All other symbols are terminal. Integer terminals correspond to rule numbers of the input language grammar given above, and the other terminals are symbols of the parsed language, except for the special terminal \"nil\" which indicates the end of the list of subconstituents of a sentence constituent, and may also be read as the empty string e. Note the ambiguity for nonterminal at4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output grammar produced by the parser", "sec_num": "4.1" }, { "text": "It is possible to simplify this grammar to 7 rules without losing the sharing of common subparses. However it would no longer exhibit the structure that makes it readable as a shared-forest (though this structure could be retrieved). Here again the two $ symbols must be read as delimiters. The \"nil\" symbols, no longer useful, have been omitted in these two parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output grammar produced by the parser", "sec_num": "4.1" }, { "text": "To explain the structure of the shared forest, we first build a graph from the grammar, as shown in figure 4. Each node corresponds to one terminal or nonterminal of the grammar in figure 3, and is labelled by it. The labels at the right of small dashes are rule numbers from the parsed language grammar (see beginning of section 4). The basic structure is that of figure 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse shared-forest constructed from that grammar", "sec_num": "4.2" }, { "text": "From this first graph, we can trivially derive the more traditional shared forest given in figure 5. Note that this simplified representation is not always adequate since it does not allow partial sharing of their sons between two nodes. 
Each node includes a label which is a non-terminal of the parsed language grammar, and for each possible derivation (several in case of ambiguity) there is the number of the grammar rule used for that derivation. Though this simplified version is more readable, the representation of figure 5 is not adequate to represent partial sharing of the subconstituents of a constituent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse shared-forest constructed from that grammar", "sec_num": "4.2" }, { "text": "Of course, the \"constructions\" given in this section are purely virtual. In an implementation, the data-structure representing the grammar of figure 3 may be directly interpreted and used as a shared-forest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse shared-forest constructed from that grammar", "sec_num": "4.2" }, { "text": "A similar construction for top-down parsing is sketched in appendix B. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parse shared-forest constructed from that grammar", "sec_num": "4.2" }, { "text": "As indicated earlier, our intent is mostly to understand phenomena that would be harder to evidence in more complex grammatical formalisms. This statement implies that our approach can be extended. This is indeed the case. It is known that many simple parsing schemata can be expressed with stack based machines [32] . This is certainly the case for all left-to-right CF chart parsing schemata.", "cite_spans": [ { "start": 312, "end": 316, "text": "[32]", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "5" }, { "text": "We have formally extended the concept of PDA into that of Logical PDA which is an operational push-down stack device for parsing unification based grammars [17, 18] or other non-CF grammars such as Tree Adjoining Grammars [19] . 
Hence we are reusing and developing our theoretical [18] and experimental [38] approach in this much more general setting which is more likely to be effectively usable for natural language parsing.", "cite_spans": [ { "start": 156, "end": 160, "text": "[17,", "ref_id": "BIBREF17" }, { "start": 161, "end": 164, "text": "18]", "ref_id": "BIBREF19" }, { "start": 222, "end": 226, "text": "[19]", "ref_id": "BIBREF20" }, { "start": 281, "end": 285, "text": "[18]", "ref_id": "BIBREF19" }, { "start": 303, "end": 307, "text": "[38]", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "5" }, { "text": "Furthermore, these extensions can also express, within the PDA model, non-left-to-right behavior such as is used in island parsing [38] or in Sheil's approach [26] . More generally they allow the formal analysis of agenda strategies, which we have not considered here. In these extensions, the counterpart of parse forests is proof forests of definite clause programs.", "cite_spans": [ { "start": 131, "end": 135, "text": "[38]", "ref_id": "BIBREF39" }, { "start": 159, "end": 163, "text": "[26]", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Extensions", "sec_num": "5" }, { "text": "Analysis of all-path parsing schemata within a common framework exhibits in comparable terms the properties of these schemata, and gives objective criteria for choosing a given schema when implementing a language analyzer. The approach taken here supports both theoretical analysis and actual experimentation, both for the computational behavior of parsers and for the structure of the resulting shared forest. 
Many experiments and extensions still remain to be made: improved dynamic programming interpretation of bottom-up parsers, more extensive experimental measurements with a variety of languages and parsing schemata, or generalization of this approach to more complex situations, such as word lattice parsing [21, 30] , or even handling of \"secondary\" language features. Early research in that latter direction is promising: our framework and the corresponding paradigm for parser construction have been extended to full first-order Horn clauses [17, 18] , and are hence applicable to unification based grammatical formalisms [27] . Shared forest construction and analysis can be generalized in the same way to these more advanced formalisms.", "cite_spans": [ { "start": 711, "end": 715, "text": "[21,", "ref_id": "BIBREF22" }, { "start": 716, "end": 719, "text": "30]", "ref_id": "BIBREF31" }, { "start": 948, "end": 952, "text": "[17,", "ref_id": "BIBREF17" }, { "start": 953, "end": 956, "text": "18]", "ref_id": "BIBREF19" }, { "start": 1028, "end": 1032, "text": "[27]", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This appendix gives some of the experimental data gathered to compare compilation schemata. For each grammar, the first table gives the size of the PDTs obtained by compiling it according to several compilation schemata. This size corresponds to the number of instructions generated for the PDT, which is roughly the number of possible PDT states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Experimental Comparisons", "sec_num": null }, { "text": "The second table gives two figures for each schema and for several input sentences. The first figure is the number of items computed to parse that sentence with the given schema: it may be read as the number of computation steps and is thus a measure of computational efficiency. 
The second figure is the number of items remaining after simplification of the output grammar; it is thus an indicator of sharing quality. Sharing is better when this second figure is low.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Experimental Comparisons", "sec_num": null }, { "text": "In these tables, columns headed with LR/LALR stand for the LR(0), LR(1), LALR(1) and LALR(2) cases (which often give the same results), unless one of these cases has its own explicit column.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Experimental Comparisons", "sec_num": null }, { "text": "Tests were run on the GRE, NSE, UBDA and RR grammars of [3] : they did not exhibit the loss of efficiency with increased look-ahead that was reported for the bottom-up look-ahead of [3] .", "cite_spans": [ { "start": 55, "end": 58, "text": "[3]", "ref_id": null }, { "start": 179, "end": 182, "text": "[3]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C Experimental Comparisons", "sec_num": null }, { "text": "We believe the results presented here are consistent and give an accurate comparison of performances of the parsers considered, despite some implementation departures from the strict theoretical model that were required by performance considerations. A first version of our LL(0) compiler gave results that were inconsistent with the results of the bottom-up parsers. This was due to a weakness in that LL(0) compiler which was then corrected. We consider this experience to be a confirmation of the usefulness of our uniform framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Experimental Comparisons", "sec_num": null }, { "text": "It must be stressed that these are preliminary experiments. On the basis of their analysis, we intend a new set of experiments that will better exhibit the phenomena discussed in the paper. 
In particular we wish to study variants of the schemata and dynamic programming interpretation that give the best possible sharing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C Experimental Comparisons", "sec_num": null }, { "text": "Grammar UBDA In these examples, the use of look-ahead gives approximately a 25% gain in speed efficiency over LR(0) parsing, with the same forest sharing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.I", "sec_num": null }, { "text": "However the use of look-ahead may increase the LR 1 This grammar is LR(1) but is not LALR. For each compilation schema it gives the same result on all possible inputs: aed, aec, bec and bed. The terminal f may be ambiguously parsed as X or as Y. This ambiguous left context increases uselessly the complexity of the LR(1) parsing during recognition of the A and B constituents. Hence LR(0) performs better in this case since it ignores the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.I", "sec_num": null }, { "text": "Space cubic algorithms often require the language grammar to be in Chomsky Normal Form, and some authors have incorrectly conjectured that cubic complexity cannot be obtained otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This was noted by Sheil [26] and is implicit in his use of \"2-form\" grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This grammar may be simplified by eliminating useless non-terminals, deriving on the empty string e or on a single other non-terminal. 
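The simplification just mentioned — dropping nonterminals that derive only the empty string or a single other symbol, as done to pass from the raw output grammar to its 7-rule form — can be sketched as follows. The grammar representation and all names are our own, and chains of such unit rules are assumed acyclic.

```python
def simplify(rules):
    """Splice out nonterminals whose single alternative is empty or one
    symbol long. Assumes no cycle of such unit rules and that the start
    symbol itself has more than one alternative (so it is never removed)."""
    redundant = {a: alts[0] for a, alts in rules.items()
                 if len(alts) == 1 and len(alts[0]) <= 1}

    def splice(rhs):
        # Replace each redundant symbol by (the splice of) its definition.
        out = []
        for sym in rhs:
            if sym in redundant:
                out.extend(splice(redundant[sym]))
            else:
                out.append(sym)
        return tuple(out)

    return {a: [splice(r) for r in alts]
            for a, alts in rules.items() if a not in redundant}

# A only relays to B, and B to a terminal: both disappear after splicing.
rules = {"S": [("A", "x"), ("x", "B")], "A": [("B",)], "B": [("y",)]}
```

Note that this removes only presentation-level indirection: the set of terminal strings derivable from each remaining nonterminal, and hence the sharing of subparses, is unchanged.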
As in section 4, the simplified grammar may then be represented as a graph which is similar, with more details (the rules used for the subconstituents), to the graph given in figure 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgements: We are grateful to Véronique Donzeau-Gouge for many fruitful discussions. This work has been partially supported by the Eureka Software Factory (ESF) project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null }, { "text": "The algorithm This is the formal description of a minimal dynamic programming PDT interpreter. The actual Tin interpreter has a larger instruction set. Comments are prefixed with ~. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A", "sec_num": null }, { "text": "To illustrate the creation of the shared forest, we present here informally a simplified sequence of transitions in their order of execution by a top-down parser. We indicate the transitions as Tin instructions on the left, as defined in appendix A. On the right we indicate the item and the rule produced by execution of each instruction: the item is the left-hand-side of the rule. The pseudo-instruction scan is given in italics because it does not exist, and stands for the parsing of a subconstituent: either several transitions for a complex constituent or a single shift instruction for a lexical constituent. The global behavior of scan is the same as that of shift, and it may be understood as a shift on the whole sub-constituent. Items are represented by a pair of integers. 
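The item manipulation at the heart of the dynamic programming interpretation can be made concrete. Below, items are the integer pairs just described, and a pop transition pairs a completed item (b, c) with every pending item (a, b); the function name is ours, not an instruction of the Tin interpreter.

```python
def pop_combine(pending, completed):
    """One pop transition of a dynamic-programming PDT interpreter:
    the completed constituent spans (b, c); every pending item (a, b)
    that ends where the constituent begins yields a new item (a, c)."""
    b, c = completed
    return [(a, c) for (a, end) in pending if end == b]

# Three items already in the table; only those ending at position 2
# can combine with a constituent recognized over (2, 5).
pending = [(0, 2), (1, 2), (3, 4)]
```

Because items live in a shared table rather than on separate stacks, each pair (a, b) is built once no matter how many computation paths reach it — this is where the worst-case cubic bound comes from.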
Hence we give no details about states or input, but keep just enough information to see how items are inter-related when applying a pop transition: it must use two items of the form (a, b) and (b, c) as indicated by the algorithm. The symbol r stands for the rule used to recognize a constituent s, and ri stands for the rule used to recognize its i-th sub-constituent si. The whole sequence, minus the first and the last two instructions, would be equivalent to \"scan s\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation of a top-down PDT", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Theory of Parsing, Translation and Compiling", "authors": [ { "first": "A", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "J", "middle": [], "last": "Ullman", "suffix": "" } ], "year": 1972, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, A.V.; and Ullman, J.D. 1972 The Theory of Parsing, Translation and Compiling. Prentice-Hall, Englewood Cliffs, New Jersey.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analyseurs Syntaxiques et Non-Déterminisme. Thèse de Doctorat, Université d'Orléans la Source", "authors": [ { "first": "S", "middle": [], "last": "Billot", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Billot, S. 1988 Analyseurs Syntaxiques et Non-Déterminisme. 
Thèse de Doctorat, Université d'Orléans la Source, Orléans (France).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Efficient Parsing Algorithms for General Context-Free Grammars", "authors": [ { "first": "M", "middle": [], "last": "Bouckaert", "suffix": "" }, { "first": "A", "middle": [], "last": "Pirotte", "suffix": "" }, { "first": "M", "middle": [], "last": "Snelling", "suffix": "" } ], "year": 1975, "venue": "Information Sciences", "volume": "8", "issue": "1", "pages": "1--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bouckaert, M.; Pirotte, A.; and Snelling, M. 1975 Efficient Parsing Algorithms for General Context-Free Grammars. Information Sciences 8(1): 1-26", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Programming Languages and Their Compilers. Courant Institute of Mathematical Sciences", "authors": [ { "first": "J", "middle": [], "last": "Cocke", "suffix": "" }, { "first": "J", "middle": [ "T" ], "last": "Schwartz", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cocke, J.; and Schwartz, J.T. 1970 Programming Languages and Their Compilers. Courant Institute of Mathematical Sciences, New York University, New York.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On the Asymptotic Complexity of Matrix Multiplication", "authors": [ { "first": "D", "middle": [], "last": "Coppersmith", "suffix": "" }, { "first": "S", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1982, "venue": "SIAM Journal on Computing", "volume": "11", "issue": "3", "pages": "472--492", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coppersmith, D.; and Winograd, S. 1982 On the Asymptotic Complexity of Matrix Multiplication. 
SIAM Journal on Computing, 11(3): 472-492.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Simple LR(k) Grammars", "authors": [ { "first": "F", "middle": [ "L" ], "last": "DeRemer", "suffix": "" } ], "year": 1971, "venue": "Communications ACM", "volume": "14", "issue": "7", "pages": "453--460", "other_ids": {}, "num": null, "urls": [], "raw_text": "DeRemer, F.L. 1971 Simple LR(k) Grammars. Communications ACM 14(7): 453-460.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An Efficient Context-Free Parsing Algorithm", "authors": [ { "first": "J", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "Communications ACM", "volume": "13", "issue": "2", "pages": "94--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Earley, J. 1970 An Efficient Context-Free Parsing Algorithm. Communications ACM 13(2): 94-102.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Proceedings of the Workshop on Partial Evaluation and Mixed Computation", "authors": [ { "first": "Y", "middle": [], "last": "Futamura", "suffix": "" } ], "year": 1988, "venue": "New Generation Computing", "volume": "6", "issue": "2,3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Futamura, Y. (ed.) 1988 Proceedings of the Workshop on Partial Evaluation and Mixed Computation. New Generation Computing 6(2,3).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An Improved Context-Free Recognizer", "authors": [], "year": null, "venue": "ACM Transactions on Programming Languages and Systems", "volume": "2", "issue": "3", "pages": "415--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "An Improved Context-Free Recognizer. 
ACM Transactions on Programming Languages and Systems 2(3): 415-462.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the Relative Efficiencies of Context-Free Grammar Recognizers", "authors": [ { "first": "L", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "S", "middle": [], "last": "Petrick", "suffix": "" } ], "year": 1965, "venue": "Communications ACM", "volume": "8", "issue": "5", "pages": "289--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffiths, L.; and Petrick, S. 1965 On the Relative Efficiencies of Context-Free Grammar Recognizers. Communications ACM 8(5): 289-300.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Automatic Language-Data Processing", "authors": [ { "first": "D", "middle": [ "G" ], "last": "Hays", "suffix": "" } ], "year": 1962, "venue": "Computer Applications in the Behavioral Sciences", "volume": "", "issue": "", "pages": "394--423", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hays, D.G. 1962 Automatic Language-Data Processing. In Computer Applications in the Behavioral Sciences, (H. Borko ed.), Prentice-Hall, pp. 394-423.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Technique for Generating Almost Optimal Floyd-Evans Productions for Precedence Grammars", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Ichbiah", "suffix": "" }, { "first": "S", "middle": [ "P" ], "last": "Morse", "suffix": "" } ], "year": 1970, "venue": "Communications ACM", "volume": "13", "issue": "8", "pages": "501--508", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ichbiah, J.D.; and Morse, S.P. 1970 A Technique for Generating Almost Optimal Floyd-Evans Productions for Precedence Grammars. Communications ACM 13(8): 501-508.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Report of Univ. 
of Hawaii, also AFCRL-65-758, Air Force Cambridge Research Laboratory", "authors": [ { "first": "J", "middle": [], "last": "Kasami", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kasami, J. 1965 An Efficient Recognition and Syntax Analysis Algorithm for Context-Free Languages. Report of Univ. of Hawaii, also AFCRL-65-758, Air Force Cambridge Research Laboratory, Bedford (Massachusetts), also 1968, University of Illinois Coordinated Science Lab. Report, No. R-257.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Algorithm Schemata and Data Structures in Syntactic Processing", "authors": [ { "first": "M", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1980, "venue": "Proceedings of the Nobel Symposium on Text Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kay, M. 1980 Algorithm Schemata and Data Structures in Syntactic Processing. Proceedings of the Nobel Symposium on Text Processing, Gothenburg.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Deterministic Techniques for Efficient Non-deterministic Parsers", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1974, "venue": "Proc. of the 2nd Colloquium on Automata", "volume": "14", "issue": "", "pages": "255--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1974 Deterministic Techniques for Efficient Non-deterministic Parsers. Proc. of the 2nd Colloquium on Automata, Languages and Programming, J. Loeckx (ed.), Saarbrücken, Springer Lecture Notes in Computer Science 14: 255-269. Also: Rapport de Recherche 72, IRIA-Laboria, Rocquencourt (France).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Parsing Incomplete Sentences", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "Proc. of the 12th Internat. 
Conf. on Computational Linguistics (COLING'88)", "volume": "1", "issue": "", "pages": "365--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 Parsing Incomplete Sentences. Proc. of the 12th Internat. Conf. on Computational Linguistics (COLING'88) Vol. 1: 365-371, D. Vargha (ed.), Budapest (Hungary).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Datalog Automata", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "Proc. of the 3rd", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 Datalog Automata. Proc. of the 3rd", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Complete Evaluation of Horn Clauses, an Automata Theoretic Approach", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 Complete Evaluation of Horn Clauses, an Automata Theoretic Approach. INRIA Research Report 913.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The Systematic Construction of Earley Parsers: Application to the Production of O(n 6) Earley Parsers for Tree Adjoining Grammars", "authors": [ { "first": "B", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, B. 1988 The Systematic Construction of Earley Parsers: Application to the Production of O(n 6) Earley Parsers for Tree Adjoining Grammars. In preparation.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Massively Parallel Network-Based Natural Language Parsing System", "authors": [ { "first": "T", "middle": [], "last": "Li", "suffix": "" }, { "first": "H", "middle": [ "W" ], "last": "Chun", "suffix": "" } ], "year": 1987, "venue": "Proc. of 2nd Int. Conf. 
on Computers and Applications, Beijing", "volume": "", "issue": "", "pages": "401--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, T.; and Chun, H.W. 1987 A Massively Parallel Network-Based Natural Language Parsing System. Proc. of 2nd Int. Conf. on Computers and Applications, Beijing (Peking): 401-408.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Spoken Sentence Recognition by Time-Synchronous Parsing Algorithm of Context-Free Grammar", "authors": [ { "first": "S", "middle": [], "last": "Nakagawa", "suffix": "" } ], "year": 1987, "venue": "Proc. ICASSP", "volume": "87", "issue": "", "pages": "829--832", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nakagawa, S. 1987 Spoken Sentence Recognition by Time-Synchronous Parsing Algorithm of Context-Free Grammar. Proc. ICASSP 87, Dallas (Texas), Vol. 2: 829-832.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Definite Clause Grammars for Language Analysis -- A survey of the Formalism and a Comparison with Augmented Transition Networks", "authors": [ { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" }, { "first": "D", "middle": [ "H D" ], "last": "Warren", "suffix": "" } ], "year": 1980, "venue": "Artificial Intelligence", "volume": "13", "issue": "", "pages": "231--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, F.C.N.; and Warren, D.H.D. 1980 Definite Clause Grammars for Language Analysis -- A survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13: 231-278.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A Simple Efficient Parser for Phrase-Structure Grammars", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Phillips", "suffix": "" } ], "year": 1986, "venue": "Quarterly Newsletter of the Soc. 
for the Study of Artificial Intelligence (AISBQ)", "volume": "59", "issue": "", "pages": "14--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillips, J.D. 1986 A Simple Efficient Parser for Phrase-Structure Grammars. Quarterly Newsletter of the Soc. for the Study of Artificial Intelligence (AISBQ) 59: 14-19.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Proceedings of the 4th IJCAI", "authors": [ { "first": "V", "middle": [ "R" ], "last": "Pratt", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "422--428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pratt, V.R. 1975 LINGOL -- A Progress Report. In Proceedings of the 4th IJCAI: 422-428.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Parser Generator for Finitely Ambiguous Context-Free Grammars", "authors": [ { "first": "J", "middle": [], "last": "Rekers", "suffix": "" } ], "year": 1987, "venue": "Computer Science/Dpt. of Software Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rekers, J. 1987 A Parser Generator for Finitely Ambiguous Context-Free Grammars. Report CS-R8712, Computer Science/Dpt. of Software Technology, Centrum voor Wiskunde en Informatica, Amsterdam (The Netherlands).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Observations on Context Free Parsing", "authors": [ { "first": "B", "middle": [ "A" ], "last": "Sheil", "suffix": "" } ], "year": 1976, "venue": "Proc. of Internat. Conf. on Computational Linguistics (COLING-76)", "volume": "", "issue": "", "pages": "71--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheil, B.A. 1976 Observations on Context Free Parsing. in Statistical Methods in Linguistics: 71-109, Stockholm (Sweden), Proc. of Internat. Conf. 
on Computational Linguistics (COLING-76), Ottawa (Canada).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The Design of a Computer Language for Linguistic Information", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Shieber", "suffix": "" } ], "year": 1984, "venue": "Proc. of the 10th Internat. Conf. on Computational Linguistics --COLING", "volume": "84", "issue": "", "pages": "362--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, S.M. 1984 The Design of a Computer Language for Linguistic Information. Proc. of the 10th Internat. Conf. on Computational Linguistics --COLING'84: 362-366, Stanford (California).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms", "authors": [ { "first": "S", "middle": [ "M" ], "last": "Shieber", "suffix": "" } ], "year": 1985, "venue": "Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, S.M. 1985 Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics: 145-152.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "MCHART: A Flexible, Modular Chart Parsing System", "authors": [ { "first": "H", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1983, "venue": "Proc. of the National Conf. on Artificial Intelligence (AAAI-83), Washington", "volume": "", "issue": "", "pages": "408--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thompson, H. 1983 MCHART: A Flexible, Modular Chart Parsing System. Proc. of the National Conf. on Artificial Intelligence (AAAI-83), Washington (D.C.), pp. 
408-410.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1986, "venue": "Proceedings of IEEE-IECE-ASJ International Conference on Acoustics, Speech, and Signal Processing (ICASSP 86)", "volume": "3", "issue": "", "pages": "1569--1572", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomita, M. 1986 An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition. In Proceedings of IEEE-IECE-ASJ International Conference on Acoustics, Speech, and Signal Processing (ICASSP 86), Vol. 3: 1569-1572.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "An Efficient Augmented-Context-Free Parsing Algorithm", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1987, "venue": "Computational Linguistics", "volume": "13", "issue": "1-2", "pages": "31--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomita, M. 1987 An Efficient Augmented-Context-Free Parsing Algorithm. Computational Linguistics 13(1-2): 31-46.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Graph-structured Stack and Natural Language Parsing", "authors": [ { "first": "M", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "249--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomita, M. 1988 Graph-structured Stack and Natural Language Parsing. Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics: 249-257.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A Bottom-Up Parser based on Predicate Logic: A Survey of the Formalism and its Implementation Technique", "authors": [ { "first": "K", "middle": [], "last": "Uehara", "suffix": "" }, { "first": "R", "middle": [], "last": "Ochitani", "suffix": "" }, { "first": "O", "middle": [], "last": "Kakusho", "suffix": "" }, { "first": "J", "middle": [], "last": "Toyoda", "suffix": "" } ], "year": 1984, "venue": "1984 Internat. Symp. on Logic Programming", "volume": "", "issue": "", "pages": "220--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uehara, K.; Ochitani, R.; Kakusho, O.; Toyoda, J. 1984 A Bottom-Up Parser based on Predicate Logic: A Survey of the Formalism and its Implementation Technique. 1984 Internat. Symp. on Logic Programming, Atlantic City (New Jersey): 220-227.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Department of Defense 1983 Reference Manual for the Ada Programming Language", "authors": [ { "first": "U", "middle": [ "S" ], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "U.S. Department of Defense 1983 Reference Manual for the Ada Programming Language. ANSI/MIL-STD-1815 A.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "General Context-Free Recognition in Less than Cubic Time", "authors": [ { "first": "L", "middle": [ "G" ], "last": "Valiant", "suffix": "" } ], "year": 1975, "venue": "Journal of Computer and System Sciences", "volume": "10", "issue": "", "pages": "308--315", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valiant, L.G. 1975 General Context-Free Recognition in Less than Cubic Time.
Journal of Computer and System Sciences, 10: 308-315.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Evaluateur de Clauses de Horn", "authors": [ { "first": "E", "middle": [], "last": "Villemonte De La Clergerie", "suffix": "" }, { "first": "A", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Villemonte de la Clergerie, E.; and Zanchetta, A. 1988 Evaluateur de Clauses de Horn. Rapport de Stage d'Option, Ecole Polytechnique, Palaiseau (France).", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "The Programming Language Pascal", "authors": [ { "first": "N", "middle": [], "last": "Wirth", "suffix": "" } ], "year": 1971, "venue": "Acta Informatica", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wirth, N. 1971 The Programming Language Pascal. Acta Informatica, 1(1).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Parsing Spoken Phrases Despite Missing Words", "authors": [ { "first": "W", "middle": [ "H" ], "last": "Ward", "suffix": "" }, { "first": "A", "middle": [ "G" ], "last": "Hauptmann", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "Stern", "suffix": "" }, { "first": "T", "middle": [], "last": "Chanak", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the 1988 International Conference on Acoustics, Speech, and Signal Processing", "volume": "1", "issue": "", "pages": "275--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ward, W.H.; Hauptmann, A.G.; Stern, R.M.; and Chanak, T. 1988 Parsing Spoken Phrases Despite Missing Words. In Proceedings of the 1988 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 88), Vol. 1: 275-278.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Recognition and Parsing of Context-Free Languages in Time n^3", "authors": [ { "first": "D", "middle": [ "H" ], "last": "Younger", "suffix": "" } ], "year": 1967, "venue": "Information and Control", "volume": "10", "issue": "", "pages": "189--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Younger, D.H. 1967 Recognition and Parsing of Context-Free Languages in Time n^3. Information and Control, 10(2): 189-208.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": ", by extending Earley's dynamic programming construction to PDTs, Lang provided in [15] a way of simulating all possible computations of any PDT in cubic time and space complexity. Griffiths & Petrick actually use Turing machines for pedagogical reasons." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Bottom-up parse-tree / Top-down parse-tree" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "'np ::= 'np 'pp (6) 'pp ::= prep 'np (7) 'vp ::= v 'np Nonterminals are prefixed with a quote symbol. The first rule is used for initialization and handling of the delimiter symbol $. The $ delimiters are implicit in the actual input sentence." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Grammar of parses of the input sentence. The two parses of the input sentence defined by this grammar" }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "Graph of the output grammar: the shared forest" }, "FIGREF5": { "type_str": "figure", "uris": null, "num": null, "text": "parser size quadratically with the grammar size. Still, a better engineered LR(1) construction should not usually increase that size as dramatically as indicated by our experimental figure." }, "TABREF0": { "type_str": "table", "html": null, "content": "
LR(0)  LR(1)  LALR(1)  LALR(2)  preced.  LL(0)
110  341  104  104  90  116
input string  LR/LALR  preced.  LL(0)
n v n prep n  71-47  72-47  169-43
n v n (prep n)^2  146-97  141-93  260-77
n v n (prep n)^3  260-172  245-161  371-122
n v n (prep n)^5  854-541  775-491  844-317
C.4  Grammar of Ada expressions
it ::'A\u2022J \u2022
LR(0)  LR(1)  LALR(1)  LALR(2)  preced.  LL(0)
38  60  41  41  36  46
input string  LR/LALR  preced.  LL(0)
14-9  15-9  41-9
ma  23-15  29-15  75-15
&aaaam  249-156  226-124  391-112
C.2  Grammar RR
\u2022 ::= x\u2022 | \u2022
The grammar is LALR(1) but not LR(0), which explains the
lower performance of the LR(0) parser.
LR(0)  LR(1)  LALR(1)  LALR(2)  preced.  LL(0)
34  37  37  37  48  46
input string  LR(0)  LR/LALR  preced.  LL(0)
x  14-9  14-9  15-9  28-9
xx  23-13  20-13  25-13  43-13
xxxxxx  99-29  44-29  56-29  123-29
C.3  Picogrammar of English
vp ::= v np
pp ::= prep np
", "num": null, "text": "This grammar, too long for inclusion here, is the grammar of expressions of the language Ada as given in the reference manual [36]. This grammar is ambiguous." } } } }