| { |
| "paper_id": "P04-1029", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:43:46.223993Z" |
| }, |
| "title": "Optimizing Typed Feature Structure Grammar Parsing through Non-Statistical Indexing", |
| "authors": [ |
| { |
| "first": "Cosmin", |
| "middle": [], |
| "last": "Munteanu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Penn", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "gpenn\u00a1@cs.toronto.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper introduces an indexing method based on static analysis of grammar rules and type signatures for typed feature structure grammars (TFSGs). The static analysis tries to predict at compile-time which feature paths will cause unification failure during parsing at run-time. To support the static analysis, we introduce a new classification of the instances of variables used in TFSGs, based on what type of structure sharing they create. The indexing actions that can be performed during parsing are also enumerated. Non-statistical indexing has the advantage of not requiring training, and, as the evaluation using large-scale HPSGs demonstrates, the improvements are comparable with those of statistical optimizations. Such statistical optimizations rely on data collected during training, and their performance does not always compensate for the training costs.", |
| "pdf_parse": { |
| "paper_id": "P04-1029", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper introduces an indexing method based on static analysis of grammar rules and type signatures for typed feature structure grammars (TFSGs). The static analysis tries to predict at compile-time which feature paths will cause unification failure during parsing at run-time. To support the static analysis, we introduce a new classification of the instances of variables used in TFSGs, based on what type of structure sharing they create. The indexing actions that can be performed during parsing are also enumerated. Non-statistical indexing has the advantage of not requiring training, and, as the evaluation using large-scale HPSGs demonstrates, the improvements are comparable with those of statistical optimizations. Such statistical optimizations rely on data collected during training, and their performance does not always compensate for the training costs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Developing efficient all-paths parsers has been a long-standing goal of research in computational linguistics. One particular class still in need of parsing time improvements is that of TFSGs. While simpler formalisms such as context-free grammars (CFGs) also face slow all-paths parsing times when the size of the grammar increases significantly, TF-SGs (which generally have fewer rules than largescale CFGs) become slow as a result of the complex structures used to describe the grammatical categories. In HPSGs (Pollard and Sag, 1994) , one category description could contain hundreds of feature values. This has been a barrier in transferring CFGsuccessful techniques to TFSG parsing.", |
| "cite_spans": [ |
| { |
| "start": 515, |
| "end": 538, |
| "text": "(Pollard and Sag, 1994)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For TFSG chart parsers, one of the most timeconsuming operations is the retrieval of categories from the chart during rule completion (closing of constituents in the chart under a grammar rule). Looking in the chart for a matching edge for a daughter is accomplished by attempting unifications with edges stored in the chart, resulting in many failed unifications. The large and complex structure of TFS descriptions (Carpenter, 1992) leads to slow unification times, affecting the parsing times. Thus, failing unifications must be avoided during retrieval from the chart.", |
| "cite_spans": [ |
| { |
| "start": 417, |
| "end": 434, |
| "text": "(Carpenter, 1992)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To our knowledge, there have been only four methods proposed for improving the retrieval component of TFSG parsing. One (Penn and Munteanu, 2003) addresses only the cost of copying large categories, and was found to reduce parsing times by an average of 25% on a large-scale TFSG (MERGE). The second, a statistical method known as quickcheck (Malouf et al., 2000) , determines the paths that are likely to cause unification failure by profiling a large sequence of parses over representative input, and then filters unifications at run-time by first testing these paths for type consistency. This was measured as providing up to a 50% improvement in parse times on the English Resource Grammar (Flickinger, 1999, ERG) . The third (Penn, 1999b ) is a similar but more conservative approach that uses the profile to re-order sister feature values in the internal data structure. This was found to improve parse times on the ALE HPSG by up to 33%.", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 145, |
| "text": "(Penn and Munteanu, 2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 342, |
| "end": 363, |
| "text": "(Malouf et al., 2000)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 694, |
| "end": 717, |
| "text": "(Flickinger, 1999, ERG)", |
| "ref_id": null |
| }, |
| { |
| "start": 730, |
| "end": 742, |
| "text": "(Penn, 1999b", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The problem with these statistical methods is that the improvements in parsing times may not justify the time spent on profiling, particularly during grammar development. The static analysis method introduced here does not use profiling, although it does not preclude it either. Indeed, an evaluation of statistical methods would be more relevant if measured on top of an adequate extent of non-statistical optimizations. Although quick-check is thought to produce parsing time improvements, its evaluation used a parser with only a superficial static analysis of chart indexing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "That analysis, rule filtering (Kiefer et al., 1999) , reduces parse times by filtering out mother-daughter unifications that can be determined to fail at compile-time. True indexing organizes the data (in this case, chart edges) to avoid unnecessary retrievals altogether, does not require the operations that it performs to be repeated once full unification is deemed necessary, and offers the support for easily adding information extracted from further static analysis of the grammar rules, while maintaining the same indexing strategy. Flexibility is one of the reasons for the successful employment of indexing in databases (Elmasri and Navathe, 2000) and automated reasoning (Ramakrishnan et al., 2001) .", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 51, |
| "text": "(Kiefer et al., 1999)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 629, |
| "end": 656, |
| "text": "(Elmasri and Navathe, 2000)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 681, |
| "end": 708, |
| "text": "(Ramakrishnan et al., 2001)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we present a general scheme for indexing TFS categories during parsing (Section 3). We then present a specific method for statically analyzing TFSGs based on the type signature and the structure of category descriptions in the grammar rules, and prove its soundness and completeness (Section 4.2.1). We describe a specific indexing strategy based on this analysis (Section 4), and evaluate it on two large-scale TFSGs (Section 5). The result is a purely non-statistical method that is competitive with the improvements gained by statistical optimizations, and is still compatible with further statistical improvements.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "TFSs are used as formal representatives of rich grammatical categories. In this paper, the formalism from (Carpenter, 1992) will be used. A TFSG is defined relative to a fixed set of types and set of features, along with constraints, called appropriateness conditions. These are collectively known as the type signature ( Figure 3 ). For each type, appropriateness specifies all and only the features that must have values defined in TFSs of that type. It also specifies the types of the values that those features can take. The set of types is partially ordered, and has a unique most general type ( -\"bottom\"). This order is called subsumption (\u00a1 ): more specific (higher) types inherit appropriate features from their more general (lower) supertypes. Two types t 1 and t 2 unify (t 1 \u00a2 t 2 \u00a3 ) iff they have a least upper bound in the hierarchy. Besides a type signature, TFSGs contain a set of grammar (phrase) rules and lexical descriptions. A simple example of a lexical description is: john\u00a4 \u00a6 \u00a5 SYNSEM : \u00a7 SYN : np\u00a8SEM : j\u00a9 , while an example of a phrase rule is given in Figure 1 . \u00a7 SYN : s\u00a8SEM : ", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 123, |
| "text": "(Carpenter, 1992)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 322, |
| "end": 330, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1080, |
| "end": 1088, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "TFSG Terminology", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u00a7 V PSem AGENT : NPSem\u00a9 \u00a9 \u00a4 \u00a5 \u00a7 SYN : np\u00a8AGR : Agr\u00a8SEM : NPSem\u00a9 , \u00a7 SYN : vp\u00a8AGR : Agr\u00a8SEM : V PSem\u00a9 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TFSG Terminology", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A TFS ( Figure 2 ) is like a recursively defined record in a programming language: it has a type and features with values that can be TFSs, all obeying the appropriateness conditions of the type signature. TFSs can also be seen as rooted graphs, where arcs correspond to features and nodes to substructures. A node typing function \u03b8 \u00a7 q\u00a9 associates a type to every node q in a TFS. Every TFS F has a unique starting or root node, q F . For a given TFS, the feature value partial function \u03b4 \u00a7 f q\u00a9 specifies the node reachable from q by feature f when one exists. The path value partial function \u03b4 \u00a7 \u03c0 q\u00a9 specifies the node reachable from q by following a path of features \u03c0 when one exists. TFSs can be unified as well. The result represents the most general consistent combination of the information from two TFSs. That information includes typing (by unifying the types), feature values (by recursive unification), and structure sharing (by an equivalence closure taken over the nodes of the arguments). For large TFSs, unification is computationally expensive, since all the nodes of the two TFSs are visited. In this process, many nodes are collapsed into equivalence classes because of structure sharing. A node x in a TFS F with root q F and a node x in a TFS F with root q F are equivalent ( Figure 3 : A type signature. For each type, appropriateness declares the features that must be defined on TFSs of that type, along with the type restrictions applying to their values.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 8, |
| "end": 16, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1299, |
| "end": 1307, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Typed Feature Structures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": ") with respect to F \u00a2 F iff x \u00a4 q F and x \u00a4 q F , or if there is a path \u03c0 such that \u03b4 F F \u00a7 \u03c0 q F \u00a9 \u00a4 x and \u03b4 F F \u00a7 \u03c0 q F \u00a9 \u00a4 x .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Typed Feature Structures", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "TFSGs are typically specified using descriptions, which logically denote sets of TFSs. Descriptions can be more terse because they can assume all of the information about their TFSs that can be inferred from appropriateness. Each non-disjunctive description can be associated with a unique most general feature structure in its denotation called a most general satisfier (MGSat). While a formal presentation can be found in (Carpenter, 1992) , we limit ourselves to an intuitive example: the TFS from Figure 2 is the MGSat of the description: throwing\u00a8THROWER :", |
| "cite_spans": [ |
| { |
| "start": 424, |
| "end": 441, |
| "text": "(Carpenter, 1992)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 501, |
| "end": 509, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00a7 PERSON : third\u00a8NUMBER : \u00a7 singular\u00a8Nr\u00a9 GENDER : masculine\u00a9\u00a8THROWN : \u00a7 PERSON : third\u00a8NUMBER : Nr GENDER : neuter\u00a9 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Descriptions can also contain variables, such as Nr.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Structure sharing is enforced in descriptions through the use of variables. In TFSGs, the scope of a variable extends beyond a single description, resulting in structure sharing between different TFSs. In phrase structure rules (Figure 1 ), this sharing can occur between different daughter categories in a rule, or between a mother and a daughter. Unless the term description is explicitly used, we will use \"mother\" and \"daughter\" to refer to the MGSat of a mother or daughter description.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 228, |
| "end": 237, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We can classify instances of variables based on what type of structure sharing they create. Internal variables are the variables that represent internal structure sharing (such as in Figure 2 ). The occurrences of such variables are limited to a single category in a phrase structure rule. External variables are the variables used to share structure between categories. If a variable is used for structure sharing both inside a category and across categories, then it is also considered an external variable. For a specific category, two kinds of external variable instances can be distinguished, depending on their occurrence relative to the parsing control strategy: active external variables and inactive external variables. Active external variables are instances of external variables that are shared between the description of a category D and one or more descriptions of categories in the same rule as D visited by the parser before D as the rule is extended (completed). Inactive external variables are the external variable instances that are not active. For example, in bottom-up left-to-right parsing, all of a mother's external variable instances would be active because, being external, they also occur in one of the daughter descriptions. Similarly, all of the leftmost daughter's external variable instances would be inactive because this is the first description used by the parser. In Figure 1 , Agr is an active external variable in the second daughter, but it is inactive in the first daughter.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 183, |
| "end": 191, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1403, |
| "end": 1411, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The active external variable instances are important for path indexing (Section 4.2), because they represent the points at which the parser must copy structure between TFSs. They are therefore substructures that must be provided to a rule by the parsing chart if these unifications could potentially fail. They also represent shared nodes in the MGSats of a rule's category descriptions. In our definitions, we assume without loss of generality that parsing proceeds bottom-up, with left-to-right of rule daughters. This is the ALE system's (Carpenter and Penn, 1996) ", |
| "cite_spans": [ |
| { |
| "start": 541, |
| "end": 567, |
| "text": "(Carpenter and Penn, 1996)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "parsing strategy. Definition 1. If D 1 \u00a1 \u00a2 \u00a1 \u00a2 \u00a1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "D n are daughter descriptions in a rule and the rules are extended from left to right, then Ext", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00a7 M GSat \u00a7 D i \u00a9 \u00a9 is the set of nodes shared between MGSat \u00a7 D i \u00a9 and MGSat \u00a7 D 1 \u00a9 \u00a1 \u00a2 \u00a1 \u00a2 \u00a1 MGSat \u00a7 D i \u00a3 1 \u00a9 . For a mother de- scription M, Ext \u00a7 MGSat \u00a7 M\u00a9 \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "is the set of nodes shared with any daughter in the same rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Because the completion of TFSG rules can cause the categories to change in structure (due to external variable sharing), we need some extra notation to refer to a phrase structure rule's categories at different times during a single application of that rule. By ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "M: if q M is M's root and \u00a5 q M is \u00a4 M's root, then \u00a6 x \u00a7 M \u00a6 \u00a4 x \u00a7 \u00a4 M such that\u00a8\u03c0 for which \u03b4 \u00a7 \u03c0 q M \u00a9 \u00a4 x and \u03b4 \u00a7 \u03c0 \u00a5 q M \u00a9 \u00a4 \u00a4 x, \u03b8 \u00a7 x\u00a9 \u00a9 \u03b8 \u00a7 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "x\u00a9 . In other words, extending the rule extends the information states of its categories monotonically. A similar relation exists between D and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00a4 D. The set of all nodes x in M such that\u00a8\u03c0 for which \u03b4 \u00a7 \u03c0 q M \u00a9 \u00a4 x and \u03b4 \u00a7 \u03c0 \u00a5 q M \u00a9 \u00a4 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "x will be denoted by \u00a4 x \u00a3 1 (and likewise for nodes in D). There may be more than one node in \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "x \u00a3 1 because of unifications that occur during the extension of M to \u00a4 M.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Sharing in Descriptions", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Indexing can be applied at several moments during parsing. We introduce a general strategy for indexed parsing, with respect to what actions should be taken at each stage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Indexing Timeline", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Three main stages can be identified. The first one consists of indexing actions that can be taken off-line (along with other optimizations that can be performed at compile-time). The second and third stages refer to actions performed at run time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Indexing Timeline", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Stage 1. In the off-line phase, a static analysis of grammar rules can be performed. The complete content of mothers and daughters may not be accessible, due to variables that will be instantiated during parsing, but various sources of information, such as the type signature, appropriateness specifications, and the types and features of mother and daughter descriptions, can be analyzed and an appropriate indexing scheme can be specified. This phase of indexing may include determining: (1a) which daughters in which rules will certainly not unify with a specific mother, and (1b) what information can be extracted from categories during parsing that can constitute indexing keys. It is desirable to perform as much analysis as possible off-line, since the cost of any action taken during run time prolongs the parsing time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Indexing Timeline", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Stage 2. During parsing, after a rule has been completed, all variables in the mother have been extended as far as they can be before insertion into the chart. This offers the possibility of further investigating the mother's content and extracting supplemental information from the mother that contributes to the indexing keys. However, the choice of such investigative actions must be carefully studied, since it might burden the parsing process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Indexing Timeline", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Stage 3. While completing a rule, for each daughter a matching edge is searched in the chart. At this moment, the daughter's active external variables have been extended as far as they can be before unification with a chart edge. The information identified in stage (1b) can be extracted and unified as a precursor to the remaining steps involved in category unification. These steps also take place at this stage.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Indexing Timeline", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To reduce the time spent on failures when searching for an edge in the chart, each edge (edge's category) has an associated index key which uniquely identifies the set of daughter categories that can potentially match it. When completing a rule, edges unifying with a specific daughter are searched for in the chart. Instead of visiting all edges in the chart, the daughter's index key selects a restricted number of edges for traversal, thus reducing the number of unification attempts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TFSG Indexing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The passive edges added to the chart represent specializations of rules' mothers. When a rule is completed, its mother M is added to the chart according to M's indexing scheme, which is the set of index keys of daughters that might possibly unify with M. The index is implemented as a hash, where the hash function applied to a daughter yields the daughter's index key (a selection of chart edges). For a passive edge representing M, M's indexing scheme provides the collection of hash entries where it will be added.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TFSG Indexing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Each daughter is associated with a unique index key. During parsing, a specific daughter is searched for in the chart by visiting only those edges that have a matching key, thus reducing the time needed for traversing the chart. The index keys can be computed off-line (when daughters are indexed by position), or during parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TFSG Indexing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In positional indexing, the index key for each daughter is represented by its position (rule number and daughter position in the rule). The structure of the index can be determined at compile-time (first stage).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Positional Indexing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For each mother M in the grammar, a collection", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Positional Indexing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "L \u00a7 M\u00a9 \u00a4 \u00a1 \u00a7 R i D j \u00a9 \u00a3 \u00a2 d aughters that can match M \u00a4 is created (M's indexing scheme), where each element of L \u00a7 M\u00a9 represents the rule number R i and daughter position D j inside rule R i (1 \u00a5 j \u00a5 arity \u00a7 R i \u00a9 ) of a category that can match with M.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Positional Indexing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For TFSGs it is not possible to compute off-line the exact list of mother-daughter matching pairs, but it is possible to rule out certain non-unifiable pairs before parsing -a compromise that pays off with a very low index management time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Positional Indexing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "During parsing, each time an edge (representing a rule's mother M) is added to the chart, it is inserted into the hash entries associated with the positions (and consequently, indexed) . We used the EFD-based modification of this algorithm (Penn and Munteanu, 2003) , which needs no active edges, and requires a constant two copies per edges, rather than the standard one copy per retrieval found in Prolog parsers. Without this, the cost of copying TFS categories would have overwhelmed the benefit of the index.", |
| "cite_spans": [ |
| { |
| "start": 240, |
| "end": 265, |
| "text": "(Penn and Munteanu, 2003)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 157, |
| "end": 184, |
| "text": "(and consequently, indexed)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Positional Indexing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u00a7 R i D j \u00a9 from the list L \u00a7 M \u00a9 (the number of entries where M is inserted is \u00a2 L \u00a7 M\u00a9 \u00a3 \u00a2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Positional Indexing", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Path indexing is an extension of positional indexing. Although it shares the same underlying principle as the path indexing used in automated reasoning (Ramakrishnan et al., 2001) , its functionality is related to quick check: extract a vector of types from a mother (which will become an edge) and a daughter, and test the unification of the two vectors before attempting to unify the edge and the daughter. Path indexing differs from quick-check in that it identifies these paths by a static analysis of grammar rules, performed off-line and with no training required. Path indexing is also built on top of positional indexing, therefore the vector of types can be different for each potentially unifiable motherdaughter pair.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 179, |
| "text": "(Ramakrishnan et al., 2001)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Path Indexing", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Similar to the abstract interpretation used in program verification (Cousot and Cousot, 1992) , the static analysis tries to predict a run-time phenomenon (specifically, unification failures) at compile-time. It tries to identify nodes in a mother that carry no relevant information with respect to unification with a particular daughter. For a mother M unifiable with a daughter D, these nodes will be grouped in a set StaticCut \u00a7 M D\u00a9 . Intuitively, these nodes can be left out or ignored while computing the unification of ", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 93, |
| "text": "(Cousot and Cousot, 1992)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a6 y \u00a7 D for which x y, \u00a6 s \u00a3 \u03b8 \u00a7 x\u00a9 \u00a6 t \u00a3 \u03b8 \u00a7 y\u00a9 , s \u00a2 t exists.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "In words, a node can be left out even if it is externally shared (or has an externally shared ancestor) if all possible types this node can have unify with all possible types its corresponding nodes in D can have. Due to structure sharing, the types of nodes in M and D can change during parsing, by being specialized to one of their subtypes. Condition 2 ensures that the types of these nodes will remain compatible (have a least upper bound), even if they specialize during rule completion. An intuitive example (real-life examples cannot be reproduced here -a category in a typical TFSG can have hundreds of nodes) is presented in Figure 4 . ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 634, |
| "end": 642, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a5 RigidCut \u00a6 M \u00a4 D \u00a7 , while x 2 \u00a5 VariableCut \u00a6 M \u00a4 D \u00a7 (\u03b8\u00a6 y 2 \u00a7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "can promote only to t 7 , thus x 2 and y 2 will always be compatible). x 4 is not included in the StaticCut, because if \u03b8\u00a6 y 5 \u00a7 promotes to t 5 , then \u03b8\u00a6 y 4 \u00a7 will promote to t 5 (not unifiable with t 3 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "When computing the unification between a mother and a daughter during parsing, the same outcome (success or failure) will be reached by using a reduced representation of the mother ( ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a4 M s D \u00a2 \u00a4 D \u00a3 \u00a4 \u00a6 \u00a5 \u00a4 M \u00a2 \u00a4 D \u00a3 , (2) \u00a4 M s D \u00a2 \u00a4 D\u00a8\u00a4 \u00a6 \u00a5 \u00a4 M \u00a2 \u00a4 D\u00a8. Proof. The second part ( \u00a4 M s D \u00a2 \u00a4 D\u00a8\u00a4 \u00a6 \u00a5 \u00a4 M \u00a2 \u00a4 D\u00a8) of Proposition 1 has a straightforward proof: if \u00a4 M s D \u00a2 \u00a4 D\u00a8, then\u00a8\u00a4 z \u00a7 \u00a4 M s D \u00a4 D such that \u00a9 t for which \u00a6 \u00a4 x \u00a7 \u00a4 z t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 . Since \u00a4 M s D \u00a4 M,\u00a8\u00a4 z \u00a7 \u00a4 M \u00a4 D such that \u00a9 t for which \u00a6 \u00a4 x \u00a7 \u00a4 z t \u00a3 \u03b8 \u00a7 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "x\u00a9 , and therefore, Three lemmata need to be formulated:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Lemma 1. If \u00a4 x \u00a7 \u00a4 M and x \u00a7 \u00a4 x \u00a3 1 , then \u03b8 \u00a7 \u00a4 x\u00a9 \u00a3 \u03b8 \u00a7 x\u00a9 . Similarly, for \u00a4 y \u00a7 \u00a4 D, y \u00a7 \u00a4 y \u00a3 1 , \u03b8 \u00a7 \u00a4 y\u00a9 \u00a3 \u03b8 \u00a7 y\u00a9 . Lemma 2. If types t 0 t 1 \u00a1 \u00a2 \u00a1 \u00a2 \u00a1 t n are such that \u00a6 t 0 \u00a3 t 0 \u00a6 i \u00a7 \u00a7 1 \u00a1 \u00a2 \u00a1 \u00a2 \u00a1 n\u00a9 , t 0 \u00a2 t i \u00a3 , then\u00a8t \u00a3 t 0 such that \u00a6 i \u00a7 \u00a7 1 \u00a1 \u00a2 \u00a1 \u00a2 \u00a1 n\u00a9 , t \u00a3 t i . Lemma 3. If \u00a4 x \u00a7 \u00a4 M and \u00a4 y \u00a7 \u00a4 D for which \u00a4 x \u00a4 y, then x \u00a7 \u00a4 x \u00a3 1 y \u00a7 \u00a4 y \u00a3 1 such that x y.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "In proving the first part of Proposition 1, four cases are identified: Case A:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a2 \u00a4 z \u00a1 \u00a4 M \u00a2\u00a4 1 and \u00a2 \u00a4 z \u00a1 \u00a4 D\u00a2 \u00a6 \u00a4 1, Case B: \u00a2 \u00a4 z \u00a1 \u00a4 M \u00a2\u00a4 1 and \u00a2 \u00a4 z \u00a1 \u00a4 D\u00a2 \u00a5 1, Case C: \u00a2 \u00a4 z \u00a1 \u00a4 M \u00a2 \u00a5 1 and \u00a2 \u00a4 z \u00a1 \u00a4 D\u00a2 \u00a4 1, Case D: \u00a2 \u00a4 z \u00a1 \u00a4 M \u00a2\u00a5 1 and \u00a2 \u00a4 z \u00a1 \u00a4 D\u00a2 \u00a5 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Case A is trivial, and D is a generalization of B and C.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Case B. It will be shown that\u00a8t \u00a7 Type such that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a6 \u00a4 y \u00a7 \u00a4 z \u00a1 \u00a4 D and for \u00a4 x\u00a4 \u00a4 \u00a4 z \u00a1 \u00a4 M, t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 and t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 . Subcase B.i: \u00a4 x \u00a7 \u00a4 M \u00a4 x \u00a2 \u00a7 \u00a4 M s D . \u00a6 \u00a4 y \u00a7 \u00a4 z \u00a1 \u00a4 D, \u00a4 y \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "x. Therefore, according to Lemma 3,\u00a8x \u00a7 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "x \u00a3 1 y \u00a7 \u00a4 y \u00a3 1 such that x y. Thus, according to Condition 2 of Definition 3, \u00a6 s \u00a3 \u03b8 \u00a7 y\u00a9 \u00a6 t \u00a3 \u03b8 \u00a7 x\u00a9 , s \u00a2 t \u00a3 . But according to Lemma 1, \u03b8 \u00a7 \u00a4 y\u00a9 \u00a3 \u03b8 \u00a7 y\u00a9 and \u03b8 \u00a7 \u00a4 x\u00a9 \u00a3 \u03b8 \u00a7 x\u00a9 . Therefore, \u00a6 \u00a4 y \u00a7 \u00a4 z \u00a1 \u00a4 D, \u00a6 s \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 , \u00a6 t \u00a3 \u03b8 \u00a7 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "x\u00a9 , s \u00a2 t \u00a3 , and hence,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a6 \u00a4 y \u00a7 \u00a4 z \u00a1 \u00a4 D \u00a6 t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 t \u00a2 \u03b8 \u00a7 \u00a4 y\u00a9 \u00a3 . Thus, according to Lemma 2,\u00a8t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 \u00a6 \u00a4 y \u00a7 \u00a4 z \u00a1 \u00a4 D, t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 . Subcase B.ii: \u00a4 x \u00a7 \u00a4 M \u00a4 x \u00a7 \u00a4 M s D . Since \u00a4 M s D \u00a2 \u00a4 D \u00a3 , t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 such that \u00a6 \u00a4 y \u00a7 \u00a4 z \u00a1 \u00a4 D, t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 . Case C. It will be shown that\u00a8t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 such that \u00a6 \u00a4 x \u00a7 \u00a4 z , t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 . Let \u00a4 y\u00a4 \u00a4 \u00a4 z \u00a1 \u00a4 D. The set \u00a4 z \u00a1 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "M can be divided into two subsets:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "S ii \u00a4 \u00a4 x \u00a7 \u00a4 z \u00a1 \u00a4 M \u00a2 \u00a4 x \u00a7 \u00a4 M s D \u00a4 , and S i \u00a4 \u00a4 x \u00a7 \u00a4 z \u00a1 \u00a4 M \u00a2 \u00a4 x \u00a7 \u00a4 M \u00a4 x \u00a2 \u00a7 \u00a4 M s D , and x \u00a7 VariableCut \u00a7 M D\u00a9 \u00a4 . If x were in RigidCut \u00a7 M D\u00a9 , then necessarily \u00a2 \u00a4 z \u00a1 \u00a4 M \u00a2 would be 1. Since S ii \u00a4 M s D and \u00a4 M s D \u00a2 \u00a4 D \u00a3 , then t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 such that \u00a6 \u00a4 x \u00a7 S ii t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 (*). How- ever, \u00a6 \u00a4 x \u00a7 S ii , \u00a4 x \u00a4 y. Therefore, according to Lemma 3, \u00a6 \u00a4 x \u00a7 S ii x \u00a7 \u00a4 x \u00a3 1 y \u00a7 \u00a4 y \u00a3 1 such that x y. Thus, since x \u00a7 VariableCut \u00a7 M", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "D\u00a9 , Condition 2 of Definition 3 holds, and therefore, according to Lemma 1,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a6 s 1 \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 \u00a6 s 2 \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 s 1 \u00a2 s 2 \u00a3 . More than this, since t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 (for the type t from (*)), \u00a6 s 1 \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 \u00a6 s 2 \u00a3 t s 1 \u00a2 s 2 \u00a3 , and hence, \u00a6 s 2 \u00a3 t s 2 \u00a2 \u03b8 \u00a7 \u00a4 x\u00a9 \u00a3 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Thus, according to Lemma 2 and to (*),\u00a8t", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u00a3 t \u00a3 \u03b8 \u00a7 \u00a4 y\u00a9 such that \u00a6 \u00a4 x \u00a7 S ii t \u00a3 \u03b8 \u00a7 \u00a4 x\u00a9 \u00a1 Thus, t such that \u00a6 \u00a4 x \u00a7 \u00a4 z , t \u00a3 \u03b8 \u00a7 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "x\u00a9 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "While Proposition 1 could possibly be used by grammar developers to simplify TFSGs themselves at the source-code level, here we only exploit it for internally identifying index keys for more efficient chart parsing with the existing grammar. There may be better static analyses, and better uses of this static analysis. In particular, future work will focus on using static analysis to determine smaller representations (by cutting nodes in Static Cuts) of the chart edges themselves.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Static Analysis of Grammar Rules", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "The indexing schemes used in path indexing are built on the same principles as those in positional indexing. The main difference is the content of the indexing keys, which now includes a third element. Each mother M has its indexing scheme defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "L \u00a7 M\u00a9 \u00a4 \u00a7 R i D j V i\u00a2 j \u00a9 \u00a4 . The pair \u00a7 R i D j \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "is the positional index key (as in positional indexing), while V i\u00a2 j is the path index vector containing type values extracted from M. A different set of types is extracted for each mother-daughter pair. So, path indexing uses a two-layer indexing method: the positional key for daughters, and types extracted from the typed feature structure. Each daughter's index key is now given by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "L \u00a7 D j \u00a9 \u00a4 \u00a7 R i V i\u00a2 j \u00a9 \u00a4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": ", where R i is the rule number of a potentially matching mother, and V i\u00a2 j is the path index vector containing types extracted from D j .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
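The extraction of the layer-2 key can be pictured as reading off the types at the ends of the indexing paths. A sketch in Python, reusing the paper's THROWER:PERSON / THROWN / THROWN:GENDER example; the nested-dict TFS encoding is purely illustrative:

```python
# Sketch of layer-2 key extraction. A TFS is encoded for illustration as
# nested dicts: {"_type": <type>, FEATURE: <sub-TFS>, ...}.
def type_at(tfs, path):
    """Follow a feature path from the root; None if it is undefined here."""
    for feat in path:
        if feat not in tfs:
            return None
        tfs = tfs[feat]
    return tfs["_type"]

def path_index_vector(tfs, indexing_paths):
    """The path index vector: the types at the ends of the indexing paths,
    in a fixed order shared by the mother and the daughter."""
    return tuple(type_at(tfs, p) for p in indexing_paths)

paths = [("THROWER", "PERSON"), ("THROWN",), ("THROWN", "GENDER")]
tfs = {"_type": "throwing",
       "THROWER": {"_type": "index", "PERSON": {"_type": "third"}},
       "THROWN": {"_type": "index", "GENDER": {"_type": "neuter"}}}
print(path_index_vector(tfs, paths))  # -> ('third', 'index', 'neuter')
```

The resulting tuple matches the example vector ⟨third, index, neuter⟩ discussed for Figure 2.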
| { |
| "text": "The types extracted for the indexing vectors are those of nodes found at the end of indexing paths. A path \u03c0 is an indexing path for a motherdaughter pair", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "\u00a7 M D\u00a9 iff: (1) \u03c0 is defined for both M and D, (2)\u00a8x \u00a7 StaticCut \u00a7 M D\u00a9 f s.t. \u03b4 \u00a7 f x\u00a9 \u00a4 \u03b4 \u00a7 \u03c0 q M \u00a9 (q M is M's root), and (3) \u03b4 \u00a7 \u03c0 q M \u00a9 \u00a2 \u00a7 StaticCut \u00a7 M", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "D\u00a9 . Indexing paths are the \"frontiers\" of the non-statically-cut nodes of M.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
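The "frontier" reading of this definition suggests a simple traversal: walk feature arcs down from the root, descending only through statically cut nodes, and record the path at the first surviving node. A sketch under the assumptions that the cut set is already computed and the explored arcs are acyclic (node names and the adjacency-list encoding are hypothetical):

```python
def indexing_paths(root, arcs, static_cut):
    """Paths to the first non-cut node along each branch from the root:
    keep descending only while the current node is in the static cut."""
    found, stack = [], [(root, ())]
    while stack:
        node, path = stack.pop()
        if node not in static_cut:
            found.append(path)            # frontier reached: stop here
            continue
        for feat, child in arcs.get(node, []):
            stack.append((child, path + (feat,)))
    return sorted(found)

# Toy feature graph: q -F-> a -H-> c and q -G-> b; q and a are cut.
arcs = {"q": [("F", "a"), ("G", "b")], "a": [("H", "c")]}
print(indexing_paths("q", arcs, {"q", "a"}))  # -> [('F', 'H'), ('G',)]
```

If the root itself survives the cut, the only indexing path is the empty path, i.e. the root's own type is the key.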
| { |
| "text": "A similar key extraction could be performed during Stage 2 of indexing (as outlined in Section 3), using \u00a4 M rather than M. We have found that this online path discovery is generally too expensive to be performed during parsing, however.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "As stated in Proposition 1, the nodes in Figure 2 is \u00a7 third index neuter\u00a9 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 49, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Building the Path Index", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Inserting and retrieving edges from the chart using path indexing is similar to the general method presented at the beginning of this section. The first layer of the index is used to insert a mother as an edge into appropriate chart entries, according to the positional keys for the daughters it can match.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using the Path Index", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "Along with the mother, its path index vector is inserted into the chart. When searching for a matching edge for a daughter, the search is restricted by the first indexing layer to a single entry in the chart (labeled with the positional index key for the daughter). The second layer restricts searches to the edges that have a compatible path index vector. The compatibility is defined as type unification: the type pointed to by the element V i\u00a2 j \u00a7 n\u00a9 of an edge's vector V i\u00a2 j should unify with the type pointed to by the element V i\u00a2 j \u00a7 n \u00a9 of the path index vector V i\u00a2 j of the daughter on position D j in a rule R i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using the Path Index", |
| "sec_num": "4.2.3" |
| }, |
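The two lookup layers can be sketched as follows; `unify_types` stands in for the grammar's type-lattice test, and the flat dict chart encoding is an illustrative assumption, not the actual data structure of the system:

```python
def compatible(edge_vec, daughter_vec, unify_types):
    """Layer 2: position-wise type unification of the two index vectors."""
    return all(unify_types(s, t) for s, t in zip(edge_vec, daughter_vec))

def candidate_edges(chart, positional_key, daughter_vec, unify_types):
    """Layer 1 narrows the search to one chart entry; layer 2 keeps only
    edges whose vector is compatible, i.e. those still worth attempting
    the much more expensive full TFS unification on."""
    return [edge for vec, edge in chart.get(positional_key, ())
            if compatible(vec, daughter_vec, unify_types)]

# Illustrative stand-in lattice: types unify iff equal or one is "top".
unify = lambda s, t: s == t or "top" in (s, t)
chart = {("R1", "D2"): [(("a", "b"), "edge1"), (("c", "b"), "edge2")]}
print(candidate_edges(chart, ("R1", "D2"), ("a", "top"), unify))  # -> ['edge1']
```

Edges filtered out here are exactly those whose index-path types can no longer become consistent with the daughter's, so no completed analysis is lost by skipping them.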
| { |
| "text": "Two TFSGs were used to evaluate the performance of indexing: a pre-release version of the MERGE grammar, and the ALE port of the ERG (in its final form). MERGE is an adaptation of the ERG which uses types more conservatively in favour of relations, macros and complex-antecedent constraints. This pre-release version has 17 rules, 136 lexical items, 1157 types, and 144 introduced features. The ERG port has 45 rules, 1314 lexical entries, 4305 types and 155 features. MERGE was tested on 550 sentences of lengths between 6 and 16 words, extracted from the Wall Street Journal annotated parse trees (where phrases not covered by MERGE's vocabulary were replaced by lexical entries having the same parts of speech), and from MERGE's own test corpus. ERG was tested on 1030 sentences of lengths between 6 and 22 words, extracted from the Brown Corpus and from the Wall Street Journal annotated parse trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Rather than use the current version of ALE, TFSs were encoded as Prolog terms as prescribed in (Penn, 1999a) , where the number of argument positions is the number of colours needed to colour the feature graph. This was extended to allow for the enforcement of type constraints during TFS unification. Types were encoded as attributed variables in SICStus Prolog (Swedish Institute of Computer Science, 2004).", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 108, |
| "text": "(Penn, 1999a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The average and best improvements in parsing times of positional and path indexing over the same EFDbased parser without indexing are presented in Table 1. The parsers were implemented in SICStus 3.10.1 for Solaris 8, running on a Sun Server with 16 GB of memory and 4 UltraSparc v.9 processors at 1281 MHz. For MERGE, parsing times range from 10 milliseconds to 1.3 seconds. For ERG, parsing times vary between 60 milliseconds and 29.2 seconds. Table 1 : Parsing time improvements of positional and path indexing over the non-indexed EFD parser.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 446, |
| "end": 453, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Positional and path indexing evaluation", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Non-statistical optimizations can be seen as a first step toward a highly efficient parser, while statistical optimization can be applied as a second step. However, one of the purposes of non-statistical indexing is to eliminate the burden of training while offering comparable improvements in parsing times. A quick-check parser was also built and evaluated and the set-up times for the indexed parsers and the quick-check parser were compared (Table 2) . Quick-check was trained on a 300-sentence training corpus, as prescribed in (Malouf et al., 2000) . The training corpus included 150 sentences also used in testing. The number of paths in path indexing is different for each mother-daughter pair, ranging from 1 to 43 over the two grammars.", |
| "cite_spans": [ |
| { |
| "start": 533, |
| "end": 554, |
| "text": "(Malouf et al., 2000)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 445, |
| "end": 454, |
| "text": "(Table 2)", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with statistical optimizations", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Index Check Compiling grammar 6'30\" Compiling index 2\" 1'33\" -Training --3h28'14\" Total set-up time: 6'32\" 8'3\" 3h34'44\" As seen in Table 3 , quick-check alone surpasses positional and path indexing for the ERG. However, it is outperformed by them on the MERGE, recording slower times than even the baseline. But the combination of quick-check and path indexing is faster than quick-check alone on both grammars. Path indexing at best provided no decrease in performance over positional indexing alone in these experiments, attesting to the difficulty of maintaining efficient index keys in an implementation. The quick-check evaluation presented in (Malouf et al., 2000) uses only sentences with a length of at most 10 words, and the authors do not report the set-up times. Quick-check has an additional advantage in the present comparison, because half of the training sentences were included in the test corpus.", |
| "cite_spans": [ |
| { |
| "start": 650, |
| "end": 671, |
| "text": "(Malouf et al., 2000)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 132, |
| "end": 139, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Positional Path Quick Index", |
| "sec_num": null |
| }, |
| { |
| "text": "While quick-check improvements on the ERG confirm other reports on this method, it must be Table 4 : The number of successful and failed unifications for the non-indexed, positional indexing, path indexing, and quick-check parsers, over MERGE and ERG (collected on the slowest sentence in the corresponding test sets.) noted that quick-check appears to be parochially very well-suited to the ERG (indeed quick-check was developed alongside testing on the ERG). Although the recommended first 30 most probable failure-causing paths account for a large part of the failures recorded in training on both grammars (94% for ERG and 97% for MERGE), only 51 paths caused failures at all for MERGE during training, compared to 216 for the ERG. Further training with quick-check for determining a better vector length for MERGE did not improve its performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 91, |
| "end": 98, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Positional Path Quick Index", |
| "sec_num": null |
| }, |
| { |
| "text": "This discrepancy in the number of failure-causing paths could be resulting in an overfitted quick-check vector, or, perhaps the 30 paths chosen for MERGE really are not the best 30 (quick-check uses a greedy approximation). In addition, as shown in Table 4 , the improvements made by quick-check on the ERG are explained by the drastic reduction of (chart lookup) unification failures during parsing relative to the other methods. It appears that nothing short of a drastic reduction is necessary to justify the overhead of maintaining the index, which is the largest for quick-check because some of its paths must be traversed at run-time -path indexing only uses paths available at compile-time in the grammar source. Note that path indexing outperforms quick-check on MERGE in spite of its lower failure reduction rate, because of its smaller overhead.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 249, |
| "end": 256, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Positional Path Quick Index", |
| "sec_num": null |
| }, |
| { |
| "text": "The indexing method proposed here is suitable for several classes of unification-based grammars. The index keys are determined statically and are based on an a priori analysis of grammar rules. A major advantage of such indexing methods is the elimination of the lengthy training processes needed by statistical methods. Our experimental evaluation demonstrates that indexing by static analysis is a promising alternative to optimizing parsing with TFSGs, although the time consumed by on-line maintenance of the index is a significant concernechoes of an observation that has been made in applications of term indexing to databases and programming languages (Graf, 1996) . Further work on efficient implementations and data structures is therefore required. Indexing by static analysis of grammar rules combined with statistical methods also can provide a higher aggregate benefit.", |
| "cite_spans": [ |
| { |
| "start": 659, |
| "end": 671, |
| "text": "(Graf, 1996)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The current static analysis of grammar rules used as a basis for indexing does not consider the effect of the universally quantified constraints that typically augment the signature and grammar rules. Future work will investigate this extension as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Because we do not assume inequated TFSs(Carpenter, 1992) here, unification failure must result from type inconsistency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Compiling typed attribute-value logic grammars", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Carpenter", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Penn", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Recent Advances in Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "145--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Carpenter and G. Penn. 1996. Compiling typed attribute-value logic grammars. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technologies, pages 145-168. Kluwer.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Logic of Typed Feature Structures", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Carpenter", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Abstract interpretation and application to logic programs", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Cousot", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Cousot", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Journal of Logic Programming", |
| "volume": "13", |
| "issue": "2-3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Cousot and R. Cousot. 1992. Abstract interpre- tation and application to logic programs. Journal of Logic Programming, 13(2-3).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Fundamentals of database systems", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Elmasri", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Navathe", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Elmasri and S. Navathe. 2000. Fundamentals of database systems. Addison-Wesley.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The English Resource Grammar", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Flickinger. 1999. The English Resource Gram- mar. http://lingo.stanford.edu/erg.html.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Term Indexing", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Graf", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Graf. 1996. Term Indexing. Springer.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A bag of useful techniques for efficient and robust parsing", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Kiefer", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "U" |
| ], |
| "last": "Krieger", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Malouf", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Kiefer, H.U. Krieger, J. Carroll, and R. Malouf. 1999. A bag of useful techniques for efficient and robust parsing. In Proceedings of the 37th An- nual Meeting of the ACL.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Efficient feature structure operations without compilation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Malouf", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Carrol", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Copestake", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Natural Language Engineering", |
| "volume": "6", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Malouf, J. Carrol, and A. Copestake. 2000. Effi- cient feature structure operations without compi- lation. Natural Language Engineering, 6(1).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A tabulationbased parsing method that reduces copying", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Penn", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Munteanu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 41st Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Penn and C. Munteanu. 2003. A tabulation- based parsing method that reduces copying. In Proceedings of the 41st Annual Meeting of the ACL, Sapporo, Japan.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "An optimised Prolog encoding of typed feature structures", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Penn", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Penn. 1999a. An optimised Prolog encoding of typed feature structures. Technical Report 138, SFB 340, T\u00fcbingen.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Optimising don't-care nondeterminism with statistical information", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Penn", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Penn. 1999b. Optimising don't-care non- determinism with statistical information. Techni- cal Report 140, SFB 340, T\u00fcbingen.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Head-driven Phrase Structure Grammar", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Pollard and I. Sag. 1994. Head-driven Phrase Structure Grammar. The University of Chicago Press.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Term indexing. In Handbook of Automated Reasoning", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [ |
| "V" |
| ], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sekar", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Voronkov", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "II", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I.V. Ramakrishnan, R. Sekar, and A. Voronkov. 2001. Term indexing. In Handbook of Auto- mated Reasoning, volume II, chapter 26. Elsevier Science.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "A phrase rule stating that the syntactic category s can be combined from np and vp if their values for agr are the same. The semantics of s is that of the verb phrase, while the semantics of the noun phrase serves as agent.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "we symbolize the mother M after M's rule is completed (all of the rule's daughters are matched with edges in the chart). \u00a4 D symbolizes the daughter D after all daughters to D's left in D's rule were unified with edges from the chart. An important relation exists between M and \u00a4", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Given the above type signature, mother M and daughter D (externally shared nodes are pointed to by dashed arrows), nodes x 1 \u00a4 x and x 3 from M can be left out when unifying M with D during parsing. x 1 and x 3", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Therefore, the types of the first nodes not included in StaticCut(M̄, D̄) along each path π that stems from the root of M̄ and D̄ are included in the indexing key, since these nodes might contribute to the success/failure of the unification. It should be mentioned that the vectors Vi,j are filled with values extracted from M̄ after M's rule is completed, and from D̄ after all daughters to the left of D are unified with edges in the chart. As an example, assuming that the indexing paths are THROWER:PERSON, THROWN, and THROWN:GENDER, the path index vector for the TFS shown in", |
| "uris": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td colspan=\"2\">M̄ and D̄. The StaticCut can be divided into two subsets: StaticCut(M̄, D̄) = RigidCut(M̄, D̄) ∪ VariableCut(M̄, D̄)</td></tr><tr><td colspan=\"2\">Definition 2. RigidCut(M̄, D̄) is the largest subset of nodes x ∈ M̄ such that ∃y ∈ D̄ for which x ≡ y: 1. x ∉ Ext(M̄), y ∉ Ext(D̄), 2. ∀x′ ∈ M̄ s.t. ∃π s.t. δ(π, x′) = x, x′ ∉ Ext(M̄), and 3. ∀y′ ∈ D̄ s.t. ∃π s.t. δ(π, y′) = y, y′ ∉ Ext(D̄).</td></tr><tr><td colspan=\"2\">Definition 3. VariableCut is the largest subset of nodes x ∈ M̄ such that: 1. x ∉ RigidCut(M̄, D̄), and 2.</td></tr></table>", |
| "text": "The RigidCut represents nodes that can be left out because neither they nor any of their δπ-ancestors can have their type values changed by means of external variable sharing. The VariableCut represents nodes that are either externally shared, or have an externally shared ancestor, but can still be left out.", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "num": null, |
| "content": "<table/>", |
| "text": "The set-up times for non-statistically indexed parsers and statistically optimized parsers for MERGE.", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "html": null, |
| "num": null, |
| "content": "<table/>", |
| "text": "Comparison of average improvements over non-indexed parsing among all parsers.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |