{ "paper_id": "P89-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:14:49.015334Z" }, "title": "SOME CHART-BASED TECHNIQUES FOR PARSING ILL-FORMED INPUT", "authors": [ { "first": "Chris", "middle": [ "S" ], "last": "Mellish", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "addrLine": "80 South Bridge", "postCode": "EH1 1HN", "settlement": "EDINBURGH", "country": "Scotland" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We argue for the usefulness of an active chart as the basis of a system that searches for the globally most plausible explanation of failure to syntactically parse a given input. We suggest semantics-free, grammarindependent techniques for parsing inputs displaying simple kinds of ill-formedness and discuss the search issues involved.", "pdf_parse": { "paper_id": "P89-1013", "_pdf_hash": "", "abstract": [ { "text": "We argue for the usefulness of an active chart as the basis of a system that searches for the globally most plausible explanation of failure to syntactically parse a given input. We suggest semantics-free, grammarindependent techniques for parsing inputs displaying simple kinds of ill-formedness and discuss the search issues involved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Although the ultimate solution to the problem of processing ill-formed input must take into account semantic and pragmatic factors, nevertheless it is important to understand the limits of recovery strategies that age based entirely on syntax and which are independent of any particular grammar. The aim of Otis work is therefore to explore purely syntactic and granmmr-independent techniques to enable a to recover from simple kinds of iil-formedness in rex. tual inputs. 
Accordingly, we present a generalised parsing strategy based on an active chart which is capable of diagnosing simple errors (unknown/misspelled words, omitted words, extra noise words) in sentences (from languages described by context free phrase structure grammars without e-productions). This strategy has the advantage that the recovery process can run after a standard (active chart) parser has terminated unsuccessfully, without causing existing work to be repeated or the original parser to be slowed down in any way, and that, unlike previous systems, it allows the full syntactic context to be exploited in the determination of a \"best\" parse for an ill-formed sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "EXPLOITING SYNTACTIC CONTEXT Weischedel and Sondheimer (1983) present an approach to processing ill-formed input based on a modified ATN parser. The basic idea is, when an initial parse fails, to select the incomplete parsing path that consumes the longest initial portion of the input, apply a special rule to allow the blocked parse to continue, and then to iterate this process until a successful parse is generated. The result is a \"hill-climbing\" search for the \"best\" parse, relying at each point on the \"longest path\" heuristic. Unfortunately, sometimes this heuristic will yield several possible parses, for instance with the sentence:", "cite_spans": [ { "start": 29, "end": 61, "text": "Weischedel and Sondheimer (1983)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "The snow blocks ↑ the road (no partial parse getting past the point shown) where the parser can fail expecting either a verb or a determiner.
Moreover, sometimes the heuristic will cause the most \"obvious\" error to be missed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "He said that the snow the road ↑ The paper will ↑ the best news is the Times where we might suspect that there is a missing verb and a misspelled \"with\" respectively. In all these cases, the \"longest path\" heuristic fails to indicate unambiguously the minimal change that would be necessary to make the whole input acceptable as a sentence. This is not surprising, as the left-right bias of an ATN parser allows the system to take no account of the right context of a possible problem element.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "Weischedel and Sondheimer's use of the \"longest path\" heuristic is similar to the use of locally least-cost error recovery in Anderson and Backhouse's (1981) scheme for compilers. It seems to be generally accepted that any form of globally \"minimum-distance\" error correction will be too costly to implement (Aho and Ullman, 1977). Such work has, however, not considered heuristic approaches, such as the one we are developing.", "cite_spans": [ { "start": 126, "end": 157, "text": "Anderson and Backhouse's (1981)", "ref_id": null }, { "start": 308, "end": 330, "text": "(Aho and Ullman, 1977)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "THE PROBLEM", "sec_num": null }, { "text": "Weischedel and Sondheimer's system is the use of grammar-specific recovery rules (\"meta-rules\" in their terminology). The same is true of many other systems for dealing with ill-formed input (e.g. Carbonell and Hayes (1983) , Jensen et al. (1983) ).
Although grammar-specific recovery rules are likely in the end always to be more powerful than grammar-independent rules, it does seem to be worth investigating how far one can get with rules that only depend on the grammar formalism used. [chart diagram for \"the gardener collects manure if the autumn\", with numbered vertices 1-6] In adapting an ATN parser to compare partial parses, Weischedel and Sondheimer have already introduced machinery to represent several alternative partial parses simultaneously. From this, it is a relatively small step to introduce a well-formed substring table, or even an active chart, which allows for a global assessment of the state of the parser. If the grammar formalism is also changed to a declarative formalism (e.g. CF-PSGs, DCGs (Pereira and Warren 1980), PATR-II (Shieber 1984) ), then there is a possibility of constructing other partial parses that do not start at the beginning of the input. In this way, right context can play a role in the determination of the \"best\" parse.", "cite_spans": [ { "start": 197, "end": 223, "text": "Carbonell and Hayes (1983)", "ref_id": null }, { "start": 226, "end": 246, "text": "Jensen et al. (1983)", "ref_id": "BIBREF7" }, { "start": 1210, "end": 1224, "text": "(Shieber 1984)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 489, "end": 573, "text": "[chart diagram for \"the gardener collects manure if the autumn\", with numbered vertices 1-6]", "ref_id": null } ], "eq_spans": [], "section": "Another feature of", "sec_num": null }, { "text": "The information that an active chart parser leaves behind for consideration by a \"post mortem\" obviously depends on the parsing strategy used (Kay 1980, Gazdar and Mellish 1989) . Active edges are particularly important from the point of view of diagnosing errors, as an unsatisfied active edge suggests a place where an input error may have occurred. So we might expect to combine violated expectations with found constituents to hypothesise complete parses.
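The "post mortem" idea above can be made concrete with a small sketch. This is our own illustration, not code from the paper: the `Edge` representation and `error_hypotheses` helper are hypothetical names, and the toy chart assumes a failed parse of "The snow blocks ... the road" as in the earlier example. Each unsatisfied active edge left in the chart names a position and a category that was expected but never found.

```python
from dataclasses import dataclass

# Illustrative sketch only (names are ours, not the paper's): a minimal
# chart edge spanning [start, end) with a category and remaining needs.
@dataclass
class Edge:
    cat: str        # category being built, e.g. "S" or "NP"
    start: int      # chart vertex where the edge begins
    end: int        # chart vertex reached so far
    needed: tuple   # categories still required, e.g. ("VP",)

    @property
    def active(self):
        # an edge with outstanding needs is an active (unsatisfied) edge
        return bool(self.needed)

def error_hypotheses(chart):
    """Unsatisfied active edges suggest positions where an input error
    (unknown/misspelled, omitted or extra word) may have occurred."""
    return [(e.end, e.needed[0]) for e in chart if e.active]

# Chart left behind by a failed parse of "the snow blocks ? te road":
chart = [
    Edge("NP", 0, 2, ()),      # "the snow" complete
    Edge("S", 0, 2, ("VP",)),  # still expecting a VP after "the snow"
    Edge("NP", 0, 3, ()),      # "the snow blocks" as a complete NP
    Edge("S", 0, 3, ("VP",)),  # ... also expecting a VP at vertex 3
]
print(error_hypotheses(chart))  # -> [(2, 'VP'), (3, 'VP')]
```

Note that both failure points survive in the chart, which is exactly the global view that a single "longest path" cannot give.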
For simplicity, we assume here that the grammar is a simple CF-PSG, although there are obvious generalisations.", "cite_spans": [ { "start": 141, "end": 162, "text": "(Kay 1980, Gazdar and", "ref_id": null }, { "start": 163, "end": 176, "text": "Mellish 1989)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "WHAT A CHART PARSER LEAVES BEHIND", "sec_num": null }, { "text": "(Left-right) top-down parsing is guaranteed to create active edges for each kind of phrase that could continue a partial parse starting at the beginning of the input. On the other hand, bottom-up parsing (by which we mean left corner parsing without top-down filtering) is guaranteed to find all complete constituents of every possible parse. In addition, whenever a non-empty initial segment of a rule RHS has been found, the parser will create active edges for the kind of phrase predicted to occur after this segment. Top-down parsing will always create an edge for a phrase that is needed for a parse, and so it will always indicate by the presence of an unsatisfied active edge the first error point, if there is one. If a subsequent error is present, top-down parsing will not always create an active edge corresponding to it, because the second may occur within a constituent that will not be predicted until the first error is corrected. Similarly, right-to-left top-down parsing will always indicate the last error point, and a combination of the two will find the first and last, but not necessarily any error points in between. On the other hand, bottom-up parsing will only create an active edge for each error point that comes immediately after a sequence of phrases corresponding to an initial segment of the RHS of a grammar rule. Moreover, it will not necessarily refine its predictions to the most detailed level (e.g. having found an NP, it may predict the existence of a following VP, but not the existence of types of phrases that can start a VP).
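The key asymmetry above — bottom-up parsing still recovers complete constituents to the *right* of an error, which left-to-right top-down parsing would not predict until the error is repaired — can be sketched as follows. This is our own toy illustration (grammar, lexicon, and function names are assumptions, not from the paper); a CKY-style closure plays the role of exhaustive bottom-up parsing.

```python
# Toy CF-PSG, binary rules only for brevity; all names are ours.
GRAMMAR = {
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
    ("NP", "VP"): "S",
}
LEXICON = {"the": "Det", "snow": "N", "blocks": "N", "covers": "V", "road": "N"}

def bottom_up_constituents(words):
    """CKY-style closure: every complete constituent over every span,
    found purely bottom-up with no top-down prediction."""
    found = {(i, i + 1, LEXICON[w]) for i, w in enumerate(words) if w in LEXICON}
    changed = True
    while changed:
        changed = False
        for (i, k, a) in list(found):
            for (k2, j, b) in list(found):
                if k == k2 and (a, b) in GRAMMAR:
                    item = (i, j, GRAMMAR[(a, b)])
                    if item not in found:
                        found.add(item)
                        changed = True
    return found

# "te" is a misspelling with no lexical entry, yet "road" (a constituent
# entirely to the right of the error) is still found bottom-up.
spans = bottom_up_constituents(["the", "snow", "blocks", "te", "road"])
print((4, 5, "N") in spans)   # constituent to the right of the error
print((0, 2, "NP") in spans)  # and "the snow" to its left
```

A top-down parser anchored at vertex 0 would stall at the unknown word and leave nothing useful over the span after it; the bottom-up pass supplies exactly the right-context material the post mortem needs.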
Weischedel and Sondheimer's approach can be seen as an incremental top-down parsing, where at each stage the rightmost unsatisfied active edge is artificially allowed to be satisfied in some way. As we have seen, there is no guarantee that this sort of hill-climbing will find the \"best\" solution for multiple errors, or even for single errors. How can we combine bottom-up and top-down parsing for a more effective solution?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WHAT A CHART PARSER LEAVES BEHIND", "sec_num": null }, { "text": "Our basic strategy is to run a bottom-up parser over the input and then, if this fails to find a complete parse, to run a modified top-down parser over the resulting chart to hypothesise possible complete parses. The modified top-down parser attempts to find the minimal errors that, when taken account of, enable a complete parse to be constructed. Imagine that a bottom-up parser has already run over the input \"the gardener collects manure if the autumn\". Then Figure 1 shows (informally) how a top-down parser might focus on a possible error. To implement this kind of reasoning, we need a top-down parsing rule that knows how to refine a set of global needs and a fundamental rule that is able to incorporate found constituents from either direction. When we may encounter multiple errors, however, we need to express multiple needs (e.g. ). We also need to have a fundamental rule that can absorb found phrases from anywhere in a relevant portion of the chart (e.g. given a rule \"NP → Det Adj N\" and a sequence \"as marvellous sihgt\", we need to be able to hypothesise that \"as\" should be a Det and \"sihgt\" a N). To save repeating work, we need a version of the top-down rule that stops when it reaches an appropriate category that has already been found bottom-up. Finally, we need to handle both \"anchored\" and \"unanchored\" needs. In an anchored need (e.g. 
) we know the beginning and end of the portion of the chart within which the search is to take place. In looking for a NP VP sequence in \"the happy blageon su'mpled the bait\", however, we can't initially find a complete (initial) NP or (final) VP and hence don't know where in the chart these phrases meet. We express this by , the symbol \"*\" denoting a position in the chart that remains to be determined.", "cite_spans": [], "ref_spans": [ { "start": 463, "end": 471, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "FOCUSING ON AN ERROR", "sec_num": null }, { "text": "If we adopt a chart parsing strategy with only edges that carry information about global needs, there will be considerable duplicated effort. For instance, the further refinement of the two edges: can lead to any analysis of possible NPs between 0 and 3 being done twice. Restricting the possible format of edges in this way would be similar to allowing the \"functional composition rule\" (Steedman 1987) in standard chart parsing, and in general this is not done for efficiency reasons. Instead, we need to produce a single edge that is \"in charge\" of the computation looking for NPs between 0 and 3. When possible NPs are then found, these then need to be combined with the original edges by an appropriate form of the fundamental rule. We are thus led to the following form for a generalised edge in our chart parser:", "cite_spans": [ { "start": 469, "end": 484, "text": "(Steedman 1987)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "GENERALISED TOP-DOWN PARSING", "sec_num": null }, { "text": ", where n is the final position in the chart. At any point the fundamental rule is run as much as possible. When we can proceed no further, the first need is refined by the top-down rule (hopefully search now being anchored). The fundamental rule may well again apply, taking account of smaller phrases that have already been found.
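One step of this generalised fundamental rule can be sketched in code. This is a simplified illustration under our own assumptions (it absorbs only the leftmost category of the first need, whereas the paper's rule can split a need around a phrase found anywhere in its portion); the `"*"` marker stands for an end position that is still unknown, i.e. an unanchored need.

```python
STAR = "*"  # unanchored position: end of the search portion not yet known

def fundamental(edge_needs, found):
    """One application of a (simplified) generalised fundamental rule.
    edge_needs: list of needs, each (categories, lo, hi) meaning "find
    this category sequence somewhere between chart vertices lo and hi".
    found: a complete constituent (cat, s, e) discovered bottom-up.
    Returns the reduced needs, or None if the constituent does not fit."""
    (cats, lo, hi), *rest = edge_needs
    cat, s, e = found
    if cat != cats[0] or s < lo or (hi != STAR and e > hi):
        return None  # found phrase does not satisfy the first need
    # the remaining categories must now be found after the found phrase
    return ([(cats[1:], e, hi)] if cats[1:] else []) + list(rest)

# Unanchored need: an NP then a VP somewhere from vertex 0 onwards,
# as in "the happy blageon ... the bait" where the NP/VP boundary is unknown.
needs = [(("NP", "VP"), 0, STAR)]
# A bottom-up pass found a complete NP from 0 to 3:
print(fundamental(needs, ("NP", 0, 3)))  # -> [(('VP',), 3, '*')]
```

Absorbing the NP anchors the remaining search: the VP must now start at vertex 3, even though its end is still `*`.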
When this has run, the top-down rule may then further refine the system's expectations about the parts of the phrase that cannot be found. And so on. This is just the kind of \"focusing\" that we discussed in the last section. If an edge expresses needs in several separate places, the first will eventually get resolved, the simplification rule will then apply and the rest of the needs will then be worked on.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 147, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "GENERALISED TOP-DOWN PARSING", "sec_num": null }, { "text": "For this all to make sense, we must assume that all hypothesised needs can eventually be resolved (otherwise the rules do not suffice for more than one error to be narrowed down). We can ensure this by introducing special rules for recognising the most primitive kinds of errors. The results of these rules must obviously be scored in some way, so that errors are not wildly hypothesised in all sorts of places.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GENERALISED TOP-DOWN PARSING", "sec_num": null }, { "text": " from the front of the agenda, adding this to the chart and then adding new edges to the agenda, as follows. First of all, for each edge of the form in the chart the fundamental rule determines that should be added. Secondly, for each rule NP → γ in the grammar the top-down rule determines that should be added. With generalised top-down parsing, there are more rules to be considered, but the idea is the same. Actually, for the top-down rule our implementation schedules a whole collection of single additions (\"apply the top down rule to edge α\") as a single item on the agenda. When such a request reaches the front of the queue, the actual new edges are then computed and themselves added to the agenda.
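The deferred-request scheme just described can be sketched as follows. This is our own toy rendering (the task names and `expand` callback are assumptions): instead of putting every edge the top-down rule would produce onto the agenda individually, one "apply the top-down rule to this edge" request is queued, and it is only expanded into concrete edges when it reaches the front.

```python
from collections import deque

def run(agenda, chart, expand):
    """Process agenda items in order. An ('edge', e) item is added to the
    chart directly; a ('top_down', e) item is a deferred request that is
    expanded into new edge items only when it reaches the front."""
    cycles = 0
    while agenda:
        task = agenda.popleft()
        cycles += 1
        if task[0] == "edge":
            chart.append(task[1])
        else:  # ("top_down", edge): expand the deferred request now
            for new_edge in expand(task[1]):
                agenda.append(("edge", new_edge))
    return cycles

# Toy expansion: refining an S-need yields two more specific edges.
expansions = {"S?": ["NP?", "VP?"]}
agenda = deque([("top_down", "S?")])
chart = []
cycles = run(agenda, chart, lambda e: expansions.get(e, []))
print(chart, cycles)  # -> ['NP?', 'VP?'] 3
```

The agenda never holds more than the deferred request plus its eventual expansions, at the cost of an extra cycle per request — the smaller-but-more-structured trade-off noted in the text.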
The result of this strategy is to make the agenda smaller but more structured, at the cost of some extra cycles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PRELIMINARY EXPERIMENTS", "sec_num": null }, { "text": "The preliminary results show that, for small sentences and only one error, enumerating all the possible minimum-penalty errors takes no more than 10 times as long as parsing the correct sentences. Finding the first minimal-penalty error can also be quite fast. There is, however, a great variability between the types of error. Errors involving completely unknown words can be diagnosed reasonably quickly because the presence of an unknown word allows the estimation of penalty scores to be quite accurate (the system still has to work out whether the word can be an addition and for what categories it can substitute for an instance of, however). We have not yet considered multiple errors in a sentence, and we can expect the behaviour to worsen dramatically as the number of errors increases. Although Table 1 does not show this, there is also a great deal of variability between sentences of the same length with the same kind of introduced error. It is noticeable that errors towards the end of a sentence are harder to diagnose than those at the start. This reflects the left-right orientation of the parsing rules - an attempt to find phrases starting to the right of an error will have a PBG score at least one more than the estimated PB, whereas an attempt to find phrases in an open-ended portion of the chart starting before an error may have a PBG score the same as the PB (as the error may occur within the phrases to be found).
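The effect of the PB/PBG scores on agenda ordering can be illustrated with a hedged sketch. The definitions of PB and PBG are given in an earlier part of the paper that is elided here; we simply assume an A*-style reading (PB roughly "penalties committed so far", PBG that plus a lower bound on penalties still to come) and show how ordering the agenda by PBG defers the costlier parsing attempts described in the text.

```python
import heapq

def best_first(tasks):
    """Pop parsing attempts in order of increasing PBG score.
    tasks: list of (description, pbg) pairs; all names here are ours."""
    heap = [(pbg, name) for name, pbg in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

tasks = [
    ("phrase right of suspected error", 2),  # PBG at least one more than PB
    ("open-ended phrase before error", 1),   # PBG may equal the PB
]
print(best_first(tasks))  # the open-ended attempt is tried first
```

This is why attempts to the right of an error sink to the lower parts of the agenda while open-ended attempts before the error keep being pursued.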
Thus more parsing attempts will be relegated to the lower parts of the agenda in the first case than in the second.", "cite_spans": [], "ref_spans": [ { "start": 808, "end": 815, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null }, { "text": "One disturbing fact about the statistics is that the number of minimal-penalty solutions may be quite large. For instance, the ill-formed sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null }, { "text": "who has John seen on that had was formed by adding the extra word \"had\" to the sentence \"who has John seen on that\". Our parser found three other possible single errors to account for the sentence. The word \"on\" could have been an added word, the word \"on\" could have been a substitution for a complementiser and there could have been a missing NP after \"on\". This large number of solutions could be an artefact of our particular grammar and lexicon; certainly it is unclear how one should choose between possible solutions in a grammar-independent way. In a few cases, the introduction of a random error actually produced a grammatical sentence - this occurred, for instance, twice with sentences of length 5 given one random added word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null }, { "text": "At this stage, we cannot claim that our experiments have done anything more than indicate a certain concreteness to the ideas and point to a number of unresolved problems. It remains to be seen how the performance will scale up for a realistic grammar and parser. There are a number of detailed issues to resolve before a really practical implementation of the above ideas can be produced.
The indexing strategy of the chart needs to be altered to take into account the new parsing rules, and remaining problems of duplication of effort need to be addressed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null }, { "text": "For instance, the generalised version of the fundamental rule allows an active edge to combine with a set of inactive edges satisfying its needs in any order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null }, { "text": "The scoring of errors is another area which should be better investigated. Where extra words are introduced accidentally into a text, in practice they are perhaps unlikely to be words that are already in the lexicon. Thus when we gave our system sentences with known words added, this may not have been a fair test. Perhaps the scoring system should prefer added words to be words outside the lexicon, substituted words to substitute for words in open categories, deleted words to be non-content words, and so on. Perhaps also the confidence of the system about possible substitutions could take into account whether a standard spelling corrector can rewrite the actual word to a known word of the hypothesised category. A more sophisticated error scoring strategy could improve the system's behaviour considerably for real examples (it might of course make less difference for random examples like the ones in our experiments).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null }, { "text": "Finally, the behaviour of the approach with realistic grammars written in more expressive notations needs to be established.
At present, we are investigating whether any of the current ideas can be used in conjunction with Allport's (1988) \"interesting corner\" parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EVALUATION AND FUTURE WORK", "sec_num": null } ], "back_matter": [ { "text": "This work was done in conjunction with the SERC-supported project GR/D/16130. I am currently supported by an SERC Advanced Fellowship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ACKNOWLEDGEMENTS", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Principles of Compiler Design", "authors": [ { "first": "Alfred", "middle": [ "V" ], "last": "Aho", "suffix": "" }, { "first": "Jeffrey", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1977, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aho, Alfred V. and Ullman, Jeffrey D. 1977 Principles of Compiler Design. Addison-Wesley.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The TICC: Parsing Interesting Text", "authors": [ { "first": "David", "middle": [], "last": "Allport", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the Second ACL Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allport, David. 1988 The TICC: Parsing Interesting Text. In: Proceedings of the Second ACL Conference on Applied Natural Language Processing, Austin, Texas.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural Language Processing in LISP - An Introduction to Computational Linguistics", "authors": [ { "first": "Gerald", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Mellish", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gazdar, Gerald and Mellish, Chris. 
1989 Natural Language Processing in LISP - An Introduction to Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Parse Fitting and Prose Fitting: Getting a Hold on Ill-Formedness", "authors": [ { "first": "Karen", "middle": [], "last": "Jensen", "suffix": "" }, { "first": "George", "middle": [ "E" ], "last": "Heidorn", "suffix": "" }, { "first": "Lance", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Yael", "middle": [], "last": "Ravin", "suffix": "" } ], "year": 1983, "venue": "AJCL", "volume": "9", "issue": "3-4", "pages": "147--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jensen, Karen, Heidorn, George E., Miller, Lance A. and Ravin, Yael. 1983 Parse Fitting and Prose Fitting: Getting a Hold on Ill-Formedness. AJCL 9(3-4): 147-160.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Algorithm Schemata and Data Structures in Syntactic Processing", "authors": [ { "first": "Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kay, Martin. 1980 Algorithm Schemata and Data Structures in Syntactic Processing. Research Report CSL-80-12, Xerox PARC.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks", "authors": [ { "first": "Fernando", "middle": [ "C", "N" ], "last": "Pereira", "suffix": "" }, { "first": "David", "middle": [ "H", "D" ], "last": "Warren", "suffix": "" } ], "year": 1980, "venue": "Artificial Intelligence", "volume": "13", "issue": "3", "pages": "231--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, Fernando C. N. and Warren, David H. D. 
1980 Definite Clause Grammars for Language Analysis - A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence 13(3): 231-278.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Design of a Computer Language for Linguistic Information", "authors": [ { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" } ], "year": 1984, "venue": "Proceedings of COLING-84", "volume": "", "issue": "", "pages": "362--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, Stuart M. 1984 The Design of a Computer Language for Linguistic Information. In Proceedings of COLING-84, 362-366.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Combinatory Grammars and Human Language Processing", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 1987, "venue": "Modularity in Knowledge Representation and Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steedman, Mark. 1987 Combinatory Grammars and Human Language Processing. In: Garfield, J., Ed., Modularity in Knowledge Representation and Natural Language Processing. Bradford Books/MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Meta-rules as a Basis for Processing Ill-Formed Input", "authors": [ { "first": "Ralph", "middle": [ "M" ], "last": "Weischedel", "suffix": "" }, { "first": "Norman", "middle": [ "K" ], "last": "Sondheimer", "suffix": "" } ], "year": 1983, "venue": "AJCL", "volume": "9", "issue": "3-4", "pages": "161--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weischedel, Ralph M. and Sondheimer, Norman K. 1983 Meta-rules as a Basis for Processing Ill-Formed Input. 
AJCL 9(3-4): 161-177.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Optimal Search Strategies for Speech Understanding Control", "authors": [ { "first": "William", "middle": [ "A" ], "last": "Woods", "suffix": "" } ], "year": 1982, "venue": "Artificial Intelligence", "volume": "18", "issue": "3", "pages": "295--326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Woods, William A. 1982 Optimal Search Strategies for Speech Understanding Control. Artificial Intelligence 18(3): 295-326.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "(by fundamental rule with NP found bottom-up) (by top-down rule) (by fundamental rule with VP found bottom-up) (by top-down rule) (by fundamental rule with NP found bottom-up) Focusing on an error.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "from sn to en>", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Generalised Top-down Parsing Rules SEARCH CONTROL AND EVALUATION FUNCTIONS", "uris": null }, "TABREF0": { "html": null, "text": "from s1 to e1, cs2 from s2 to e2, ... csn from sn to en> from s1 to e needs RHS from s1 to e> where e = if cs1 is not empty or e1 = * then * else e1 (e1 = * or cs1 is non-empty or there is no category c1 from s1 to e1)", "content": "
Fundamental rule:
<C from S to E needs [...cs11 c1 ...cs12] from s1 to e1, cs2 ...>
<c1 from S1 to E1 needs <nothing>>
<C from S to E needs cs11 from s1 to S1, cs12 from E1 to e1, cs2 ...>
(s1 ≤ S1, e1 = * or E1 ≤ e1)
Simplification rule:
<C from S to E needs <> from s to s, cs2 from s2 to e2, ... csn from sn to en>
<C from S to E needs cs2 from s2 to e2, ... csn from sn to en>
Garbage rule:
Empty category rule:
<C from S to E needs [c1 ...cs1] from s to s, cs2 from s2 to e2, ... csn from sn to en>
<C from S to E needs cs2 from s2 to e2, ... csn from sn to en>
", "type_str": "table", "num": null } } } }