{
"paper_id": "1991",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:35:01.235451Z"
},
"title": "Substring Parsing for Arbitrary Context-Free Grammars",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Rekers",
"suffix": "",
"affiliation": {},
"email": "rekers@cwi.nl"
},
{
"first": "Wilco",
"middle": [],
"last": "Koorn",
"suffix": "",
"affiliation": {
"laboratory": "Programming Research Group",
"institution": "University of Amsterdam",
"location": {
"postBox": "P.O. Box 41882",
"postCode": "1009 DB",
"settlement": "Amsterdam",
"country": "The Netherlands"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A substring recognizer for a language L determines whether a string s is a substring of a sentence in L, i.e., substring-recognize(s) succeeds if and only if ∃v, w: vsw ∈ L. The algorithm for substring recognition presented here accepts general context-free grammars and uses the same parse tables as the parsing algorithm from which it was derived. Substring recognition is useful for noncorrecting syntax error recovery and for incremental parsing. By extending the substring recognizer with the ability to generate trees for the possible contextual completions of the substring, we obtain a substring parser, which can be used in a syntax-directed editor to complete fragments of sentences.",
"pdf_parse": {
"paper_id": "1991",
"_pdf_hash": "",
"abstract": [
{
"text": "A substring recognizer for a language L determines whether a string s is a substring of a sentence in L, i.e., substring-recognize(s) succeeds if and only if ∃v, w: vsw ∈ L. The algorithm for substring recognition presented here accepts general context-free grammars and uses the same parse tables as the parsing algorithm from which it was derived. Substring recognition is useful for noncorrecting syntax error recovery and for incremental parsing. By extending the substring recognizer with the ability to generate trees for the possible contextual completions of the substring, we obtain a substring parser, which can be used in a syntax-directed editor to complete fragments of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A recognizer for a language L determines whether a sentence s belongs to L. A substring recognizer performs a more complicated job, as it determines whether s can be part of a sentence of L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A recently developed substring recognition algorithm [4] uses an ordinary LR parsing algorithm with special parse tables. For ordinary parsing, this parsing algorithm is limited to LR(1) grammars, but the more complicated nature of substring recognition limits it to bounded-context grammars (see Section 3).",
"cite_spans": [
{
"start": 54,
"end": 57,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 302,
"end": 312,
"text": "Section 3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "algorithm that does not suffer from this drawback. It accepts general context-free grammars and uses the same parse tables as the ordinary parser. Our algorithm is based on the pseudo-parallel parsing algorithm of Tomita [17], which runs a dynamically varying number of LR parsers in parallel and accepts general context-free grammars. In Section 5 we extend the substring recognizer into a substring parser that generates trees for the possible completions of the substring.",
"cite_spans": [
{
"start": 222,
"end": 226,
"text": "[17]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "218",
"sec_num": null
},
{
"text": "In its simplest form, a parser stops at the first syntax error found. If it has to find as many errors in the input as possible, it can try to correct the error in order to continue parsing. Spurious errors are easily introduced, however, if the parser makes false assumptions about the kind of error encountered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ".1 Syntax error recovery",
"sec_num": "2"
},
{
"text": "Substring parsing can be used to implement noncorrecting syntax error recovery. If an ordinary parser detects a syntax error on some symbol, the substring parser can be started on the next symbol to discover additional syntax errors. Using this method, it is not necessary to let the parser make any assumption about how to correct the error, or to let it skip input until a trusted symbol is found. Richter defines noncorrecting syntax error recovery with the aid of substring parsing and interval analysis in a formal framework [15]. He proves that his technique does not generate spurious errors, but is not explicit about its implementation.",
"cite_spans": [
{
"start": 407,
"end": 411,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": ".1 Syntax error recovery",
"sec_num": "2"
},
{
"text": "He notes, however, that there are difficulties in keeping the substring parser deterministic due to a limitation on the class of grammars accepted. Our technique could be useful here, as it implements the required substring analysis for general context-free grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Richter defines noncorrecting syntax error recovery with the aid of substring parsing and interval analysis in a formal framework",
"sec_num": null
},
{
"text": "In Section 5 we will show how the substring recognizer can be extended so that it generates parse trees for the possible completions of a substring. As the total number of possible completions will often be infinite, only generic completions are generated. A syntax-directed editor could use these to complete fragments of sentences in accordance with the grammar used, or to guess the continuation of what the user is typing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Completion tool",
"sec_num": "2.2"
},
{
"text": "Another application for substring parsing is in incremental parsing. Incremental parsing can be performed by attaching parser states to tokens [3, 1, 18]. After a modification has been made, the parser is restarted in a saved state, at a point in the text just before the modification. Parsing stops when the parser reaches a token after the modification in an old configuration (if ever). These methods are very good at minimizing the amount of recomputation after a modification, but require a huge amount of memory for storing the states of the parser (parse stacks with partial parse trees as elements).",
"cite_spans": [
{
"start": 144,
"end": 147,
"text": "[3,",
"ref_id": "BIBREF2"
},
{
"start": 148,
"end": 150,
"text": "1,",
"ref_id": "BIBREF0"
},
{
"start": 151,
"end": 154,
"text": "18]",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental parsing",
"sec_num": "2.3"
},
{
"text": "Ghezzi and Mandrioli present an alternative technique for incremental parsing [7, 8]. If the string xzy is modified to xz'y, where x and y have length k, with k the look-ahead used by the parser, then the parse trees previously generated for x and y are still valid after the modification. All subtrees previously generated for x and y can thus be abbreviated by their top non-terminals, which minimizes the length of the string to be reparsed. This technique is both time and space efficient, but is not applicable to general context-free parsing as it requires a fixed look-ahead. In our particular case, we need incremental parsing in a syntax-directed editor that uses the Tomita parser. By running a varying number of LR parsers in parallel, the Tomita parser adjusts its look-ahead dynamically to the amount needed, and is thus not limited to an a priori known k.",
"cite_spans": [
{
"start": 79,
"end": 82,
"text": "[7,",
"ref_id": "BIBREF6"
},
{
"start": 83,
"end": 85,
"text": "8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental parsing",
"sec_num": "2.3"
},
{
"text": "Incremental parsing can also be achieved in another manner: after a modification has been made in the text, find the substring s' belonging to the smallest subtree that contains the modification in the stored parse tree. If the type of this subtree is T and s' can be parsed as a tree of type T, replace the old subtree by the new one. If s' fails to parse, it may be the case that the modification introduced a syntax error, or that the subtree has been chosen too small. These two cases must be distinguished, as the incremental parser proceeds in a different way in each case. A substring parser can provide a hint as to which of the two possibilities is actually the case. If the substring parser fails on s', the modification will be syntactically incorrect in any context, and an error message can be given. If the substring parser succeeds, a larger subtree is chosen and parsing is retried. This can be more time consuming than remembering parser states, but the amount of memory needed is far less. We consider using this scheme in the syntax-directed editor GSE [11], but it has to be investigated further as a lot of work is still performed twice.",
"cite_spans": [
{
"start": 1079,
"end": 1083,
"text": "[11]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental parsing",
"sec_num": "2.3"
},
{
"text": "Cormack [4] describes a substring parse technique for Floyd's class of bounded-context or BC(1,1) grammars [6], and implements the substring parser Richter mentions [15]. A grammar is BC(1,1) if for every rule A ::= α, if some sentential form contains aαb where α is derived from A, then α is derived from A in all sentential forms containing aαb. This class is smaller than LR(1). The solution of Cormack consists in using an ordinary LR automaton, but a special parse table constructor. The sets of items generated do not only contain items of the form A ::= α.β but also \"suffix items\" of the form A ::= ···.β. These suffix items denote partial handles whose origins occur before the beginning of the input. The generated parse tables are deterministic, provided that the grammar is BC(1,1). This substring parser is used for noncorrecting error recovery in a parser for Pascal. The BC(1,1) limitation on the grammar caused problems in the definition of Pascal, which were alleviated by permitting the parse table generator to rewrite the grammar if necessary.",
"cite_spans": [
{
"start": 8,
"end": 11,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 108,
"end": 111,
"text": "[6]",
"ref_id": "BIBREF5"
},
{
"start": 168,
"end": 172,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Lang describes a method for parsing sentences containing an arbitrary number of unknown parts of unknown length [12]. The parser produces a finite representation of all possible parses (often infinite in number) that could account for the missing parts. The implementation of this method is based on Earley parsing [5], as is the Tomita algorithm we use in our own substring parser. The basic idea of Lang's method is that \"in the presence of the unknown subsequence *, scanning transitions may be applied any number of times to the same computation thread, without shifting the input stream.\" This process terminates, as parsers in the same state are joined and the number of states is finite. This method is very elegant and powerful, and can be used as a substring parser (by providing it with the string \"*s*\"). We will not use it, however, as it is more general than what we need. Whether it would be efficient enough for interactive purposes is unclear. Snelting describes how to build LR parsers that accept incomplete input [16] (also see Section 5.2).",
"cite_spans": [
{
"start": 112,
"end": 116,
"text": "[12]",
"ref_id": "BIBREF11"
},
{
"start": 320,
"end": 323,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 972,
"end": 976,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "We base the implementation of our substring parser on Tomita's algorithm. For a full description of the Tomita parsing algorithm we refer to Tomita [17], to Nozohoor-Farshi, who corrected an error in the algorithm concerning ε-productions [13], or to Rekers, who extended the algorithm to the full class of context-free grammars by including cyclic grammars 1 [14]. For a detailed explanation of LR parsing, [2, ch. 4.7] is recommended.",
"cite_spans": [
{
"start": 54,
"end": 61,
"text": "Tomita'",
"ref_id": null
},
{
"start": 139,
"end": 143,
"text": "[17]",
"ref_id": null
},
{
"start": 231,
"end": 235,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 353,
"end": 357,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 401,
"end": 414,
"text": "[2, ch. 4.7]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tomita parsing",
"sec_num": "4.1"
},
{
"text": "1 Grammars in which A ⇒ A is a possible derivation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tomita parsing",
"sec_num": "4.1"
},
{
"text": "The grammar for which our substring recognition algorithm works should be reduced in such a way that it does not contain non-terminals that cannot produce any terminal string or ε. These non-terminals can be identified easily, and all rules in which they appear should be removed from the grammar. This clean-up operation does not affect the language recognized.",
"cite_spans": [
{
"start": 364,
"end": 365,
"text": "[",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The grammar",
"sec_num": "4.2"
},
{
"text": "We extend the substring recognizer into a substring parser by generating parse trees for substrings. If the rule A ::= αβ is reduced with only nodes for β on the stack, however, additional nodes are created for α. In this way, the parse trees for the possible prefixes of s are created.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5 Substring Parsing",
"sec_num": null
},
{
"text": "Parse trees for postfixes of s are created in the same way: after processing s the parser has to finish all rules which are in the process of being recognized. These are the rules in the kernel of the current state of the parser. If only α has been seen from a rule A ::= αβ, the rule is reduced and additional nodes are created for β. It can even be the case that only β has been recognized from a rule A ::= αβγ, and that nodes must be created for both α and γ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5 Substring Parsing",
"sec_num": null
},
{
"text": "By producing only parse trees that are most general, the number of possible completions is reduced, but it is often still too large and not even always finite. We propose the following rules to limit this number still further:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further reduction of the number of possible completions",
"sec_num": "5.2"
},
{
"text": "1. The parse trees generated are kept as compact as possible by disallowing derivations of the form A ⇒ αA, A ⇒ αAβ, and A ⇒ Aβ, where only A has actually been recognized and all elements of α and β would produce elements in σ1 or σ2. Clearly, such derivations can be repeated infinitely often. They are undesirable as they only enlarge σ1 or σ2. For example, the substring \") + 5 then if\" also has a possible completion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further reduction of the number of possible completions",
"sec_num": "5.2"
},
{
"text": "if Exp + ( Exp ) + 5 then if Exp then Stat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further reduction of the number of possible completions",
"sec_num": "5.2"
},
{
"text": "whose parse tree is given in Figure 3 . In this tree a subtree for the rule Exp : := Exp + Exp has been inserted in the prefix.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Further reduction of the number of possible completions",
"sec_num": "5.2"
},
{
"text": "The number of possible sentential forms for which parse trees are generated is now finite, but these can still have infinitely many parse trees as the grammar may be cyclic. Rekers describes how to parse and generate parse graphs for cyclic grammars [14]. The cycles generated in this graph can be removed by his routine remove-cycles. This results in a finite number of most general completions. 3. In the generation of the postfixes of s a choice can be made for the \"simplest\" completion.",
"cite_spans": [
{
"start": 251,
"end": 255,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "That is, if a substring can be completed according to both A ::= αβ and A ::= αγ, and |β| < |γ|, we prefer A ::= αβ. In the example of Figure 2 this rule forbids the choice of the \"if-then-else\" rule, as the \"if-then\" rule already applies. Snelting's rule \"prefer reduce items over shift items\" [16] is similar to ours. It can also be formulated as: if completion according to both A ::= α and B ::= αγ (γ ≠ ε)",
"cite_spans": [
{
"start": 304,
"end": 308,
"text": "[16)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 143,
"end": 151,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "222",
"sec_num": null
},
{
"text": "is possible, then prefer A ::= α. We consider our rule more appropriate, as we take the case of β being non-empty but shorter than γ into account as well, and we only make the choice if the two rules reduce to the same non-terminal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "222",
"sec_num": null
},
{
"text": "Otherwise, the rule A ::= α might be preferred over B ::= αγ, whereas the environment in which the substring is completed needs a tree of type B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "222",
"sec_num": null
},
{
"text": "Our first measurement compares the substring recognizer with the Tomita recognizer from which it was derived, to learn the additional costs of substring parsing. 1 We have taken a grammar of about twenty rules and sentences of increasing length. These were parsed by the Tomita recognizer first. The resulting parse times are indicated in Figure 4 with a \"•\". Next, the same strings minus a randomly chosen prefix were given to the substring parser. The required times are also indicated in Figure 4.",
"cite_spans": [
{
"start": 166,
"end": 167,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 346,
"end": 353,
"text": "Figure:",
"ref_id": null
},
{
"start": 461,
"end": 469,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Measurements",
"sec_num": "6"
},
{
"text": "It turns out that the substring parser has a moderate overhead with respect to the normal parser. This overhead can be interpreted as the time needed for the substring parser to get on the \"right track\". As Figure 5 shows, the variations in this overhead are caused by the random cutting of the string. For some strings it takes longer than for others to determine of which language construct it can be a substring. The larger the grammar is, the more alternatives are available and therefore the higher the variation.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Measurements",
"sec_num": "6"
},
{
"text": "In Figure 5 we compared the time taken by the substring parser on 30 randomly chosen parts of Pascal sentences of 100 tokens. The dots indicate the amount of time needed and they are annotated with the first symbol of the substring. These measurements show that sentences starting with a token that can appear in many different contexts, like \"Id\" or \")\", take more time to recognize than sentences starting with a disambiguating token like \":=\" or \"else\".",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Measurements",
"sec_num": "6"
},
{
"text": "The adaptation of the Tomita algorithm to substring parsing results in a very elegant and powerful algorithm. The main advantage of the fact that it accepts general context-free grammars and uses ordinary LR parse tables is that substring parsing can now be applied in a very general manner, instead of only to carefully written grammars and at the cost of an extra generation phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7 Conclusions",
"sec_num": "223"
},
{
"text": "Substring parsing is slower than ordinary parsing, but this will not be a serious drawback for its application as an error recovery technique or as a completion tool. The use of the substring parser in incremental parsing, however, has to be investigated further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7 Conclusions",
"sec_num": "223"
},
{
"text": "The measurements were performed on a SUN SPARCstation. The programs were written in Lisp. The time used by the lexical scanner has not been taken into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Nigel Horspool, who suggested extending our implementation of the Tomita algorithm to substring parsing. Two years after this discussion we finally saw the need for such a technique and started a serious investigation. Next, we are grateful to Paul Hendriks, who pointed out a valuable simplification in the treatment of incomplete reductions in the substring parser, and to Jan Heering for his careful reading of earlier versions of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An efficient incremental LR parser for grammars with epsilon productions",
"authors": [
{
"first": "R",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "K",
"middle": [
"D"
],
"last": "Detro",
"suffix": ""
}
],
"year": 1983,
"venue": "Acta Informatica",
"volume": "19",
"issue": "",
"pages": "369--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Agrawal and K.D. Detro. An efficient incremental LR parser for grammars with epsilon productions. Acta Informatica, 19:369-376, 1983.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Compilers. Principles, Techniques and Tools",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sethi",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.V. Aho, R. Sethi, and J.D. Ullman. Compilers. Principles, Techniques and Tools. Addison-Wesley, 1986.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incremental LR parsers",
"authors": [
{
"first": "A",
"middle": [],
"last": "Celentano",
"suffix": ""
}
],
"year": 1978,
"venue": "Acta Informatica",
"volume": "10",
"issue": "",
"pages": "307--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Celentano. Incremental LR parsers. Acta Informatica, 10:307-321, 1978.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An LR substring parser for noncorrecting syntax error recovery",
"authors": [
{
"first": "G",
"middle": [
"V"
],
"last": "Cormack",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the SIGPLAN'89 Conference on Programming Language Design and Implementation",
"volume": "24",
"issue": "",
"pages": "161--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.V. Cormack. An LR substring parser for noncorrecting syntax error recovery. In Proceedings of the SIGPLAN'89 Conference on Programming Language Design and Implementation, pages 161-169, 1989. Appeared as SIGPLAN Notices 24(7).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An efficient context-free parsing algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the ACM",
"volume": "13",
"issue": "2",
"pages": "94--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Earley. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94-102, 1970.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bounded context syntactic analysis",
"authors": [
{
"first": "R",
"middle": [
"W"
],
"last": "Floyd",
"suffix": ""
}
],
"year": 1964,
"venue": "Communications of the ACM",
"volume": "7",
"issue": "2",
"pages": "62--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.W. Floyd. Bounded context syntactic analysis. Communications of the ACM, 7(2):62-67, 1964.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incremental parsing",
"authors": [
{
"first": "C",
"middle": [],
"last": "Ghezzi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mandrioli",
"suffix": ""
}
],
"year": 1979,
"venue": "ACM Transactions on Programming Languages and Systems",
"volume": "1",
"issue": "1",
"pages": "58--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Ghezzi and D. Mandrioli. Incremental parsing. ACM Transactions on Programming Languages and Systems, 1(1):58-70, 1979.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Augmenting parsers to support incrementality",
"authors": [
{
"first": "C",
"middle": [],
"last": "Ghezzi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mandrioli",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of the ACM",
"volume": "27",
"issue": "3",
"pages": "564--579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Ghezzi and D. Mandrioli. Augmenting parsers to support incrementality. Journal of the ACM, 27(3):564-579, 1980.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Introduction to Formal Language Theory",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Harrison",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.A. Harrison. Introduction to Formal Language Theory. Addison-Wesley, 1978.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Incremental generation of parsers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Heering",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Klint",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rekers",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the SIGPLAN'89 Conference on Programming Language Design and Implementation",
"volume": "24",
"issue": "",
"pages": "179--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Heering, P. Klint, and J. Rekers. Incremental generation of parsers. In Proceedings of the SIGPLAN'89 Conference on Programming Language Design and Implementation, pages 179-191, 1989. Appeared as SIGPLAN Notices 24(7).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "GSE: A generic text and structure editor",
"authors": [
{
"first": "J",
"middle": [
"W C"
],
"last": "Koorn",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.W.C. Koorn. GSE: A generic text and structure editor. Programming Research Group, University of Amsterdam, to appear.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parsing incomplete sentences",
"authors": [
{
"first": "B",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of the Twelfth International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "365--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Lang. Parsing incomplete sentences. In Proceedings of the Twelfth International Conference on Computational Linguistics, pages 365-371, Budapest, 1988. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Handling of ill-designed grammars in Tomita's parsing algorithm",
"authors": [
{
"first": "R",
"middle": [],
"last": "Nozohoor-Farshi",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the International Parsing Workshop '89",
"volume": "",
"issue": "",
"pages": "182--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Nozohoor-Farshi. Handling of ill-designed grammars in Tomita's parsing algorithm. In Proceedings of the International Parsing Workshop '89, pages 182-192, 1989.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Parsing for cyclic grammars",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rekers",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Rekers. Parsing for cyclic grammars. Centrum voor Wiskunde en Informatica (CWI), Amsterdam, in preparation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Noncorrecting syntax error recovery",
"authors": [
{
"first": "H",
"middle": [],
"last": "Richter",
"suffix": ""
}
],
"year": 1985,
"venue": "ACM Transactions on Programming Languages and Systems",
"volume": "7",
"issue": "3",
"pages": "478--489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Richter. Noncorrecting syntax error recovery. ACM Transactions on Programming Languages and Systems, 7(3):478-489, 1985.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How to build LR parsers which accept incomplete input",
"authors": [
{
"first": "G",
"middle": [],
"last": "Snelting",
"suffix": ""
}
],
"year": 1990,
"venue": "SIGPLAN Notices",
"volume": "25",
"issue": "4",
"pages": "51--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Snelting. How to build LR parsers which accept incomplete input. SIGPLAN Notices, 25(4):51-58, 1990.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficient Parsing for Natural Language",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Tomita. Efficient Parsing for Natural Language. Kluwer Academic Publishers, 1986.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On incremental shift-reduce parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yeh",
"suffix": ""
}
],
"year": 1983,
"venue": "BIT",
"volume": "23",
"issue": "1",
"pages": "36--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yeh. On incremental shift-reduce parsing. BIT, 23(1):36-48, 1983.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The possible parse trees for a substring s are the parse trees of all sentences vsw for which vsw ∈ L holds. To limit the number of completions we allow v and w to consist both of terminals and non-terminals. [Figure 1: START ::= Stat; START ::= Exp; Stat ::= if Exp then Stat; Stat ::= if Exp then Stat else Stat; Stat ::= Id. Figure 2: A completion of \") + 5 then if\".] We generate a parse tree, corresponding to a sentential form σ1sσ2, only when the frontier of each of its subtrees contains at least one symbol of s; i.e., we do not generate subtrees whose frontier lies entirely within σ1 or σ2. The trees that we generate are the most general trees, as it is not possible to replace any of their subtrees by a non-terminal such that the frontier still contains s as a substring. Even so, the number of completions can still be infinite. In Section 5.2 we will discuss how to limit this number still further. For the grammar of Figure 1 and the string \") + 5 then if\", a possible completion is the sentential form if ( Exp ) + 5 then if Exp then Stat, whose parse tree is given in Figure 2. To distinguish the leaves of s from those of σ1 and σ2, the former are underlined."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Generating the completions of a substring LR parsers generate parts of parse trees during ,a reduction step. On reducing A ::= a, the parse stack contains the subtrees created for o:. These are assembled in a new node of type A and the subtree created in this way is pushed on the stack. In the substring parser ordinary reductions are treated in the same way."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Another possible completion of \") + 5 then if\""
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Comparison of the substring recognizer with an ordinary one"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Time needed by the substring parser on Pascal sentences of 100 tokens"
},
"TABREF0": {
"text": "Tomita's algorithm runs several simple LR parsers in parallel. It starts as a single LR parser, but, if it encounters a conflict in the parse table, it splits in as many parsers as there are conflicting possibilities. These independently running simple parsers are fully determined by their parse stack. When two parsers have the same state on top of their stack, they are joined in a single parser with a forked stack. A reduce action which goes back over a fork in a parse stack splits the corresponding parser again into two separate parsers. If a parser hits an error entry in the parse table, it is killed by removing it from the set of active parsers. The possibility to run several parsers in parallel makes the Tomita algorithm very well suited for substring parsing.",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"text": "We will show how an individual parser processes an action, but we will not discuss the management of the different parsers, as this is done in the same way as in ordinary Tomita parsing. The parser obtains an action from the parse table with the state on top of its stack and with the current input symbol. The substring parser is controlled by the same parse table as our ordinary parser. To generate this parse table we use an extended version of the lazy and incremental parser generator IPG [10]. The extension concerns the need of the substring parser to know all states which can be reached by a transition under a given symbol. This function needs global information about the parse table, which means that the whole parse table must be known. As a consequence, the lazy aspect of IPG cannot be exploited here and the parse table is always fully expanded. The expanded parse table can also be used by the ordinary parser, of course.",
"type_str": "table",
"content": "<table><tr><td>If we have to determine whether a string s0 ... sn is a substring of a sentence in a language L, we start the substring recognition process by generating, for each state directly reachable under s0, a parser with this state on its stack. These parsers will process s1 ... sn.</td></tr><tr><td>Each parser obtains an action for input symbol sk; this can be a shift, error or reduce action, and is processed in the following manner:</td></tr><tr><td>A (shift state')-action is processed as in normal parsing: state' is pushed on the stack and the parser is ready to process sk+1.</td></tr><tr><td>An (error)-action removes the parser from the set of active parsers.</td></tr><tr><td>A (reduce A ::= \u03b1.\u03b2)-action is processed as follows: If there are at least |\u03b1\u03b2| + 1 entries on the parse stack, the reduce action is performed as in normal parsing: |\u03b1\u03b2| entries are popped off the stack, and the parse table is consulted with the state remaining on top. If there are only |\u03b2| entries on the stack, only \u03b2 of A ::= \u03b1.\u03b2 has been recognized; \u03b1 lies before s0 and should produce (a part of) a prefix of s0. This is possible, as all non-terminals in \u03b1 can produce some terminal string, and all terminals in \u03b1 trivially do, so the reduction A ::= \u03b1.\u03b2 may be performed. The states which can be reached directly by a transition under A are the states where parsing may continue; for each of these valid states a new parser is started with that state on the stack. These parsers all proceed to process sk. If there are exactly |\u03b1\u03b2| entries on the stack, s0 ... sk-1 reduces to \u03b1.\u03b2, but the context in which A is to be used is unknown; this is handled in the same way as the previous case.</td></tr><tr><td>If there are no parsers left alive after the processing of s</td></tr></table>",
"html": null,
"num": null
}
}
}
}