| { |
| "paper_id": "P11-1046", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:46:49.818952Z" |
| }, |
| "title": "Optimal Head-Driven Parsing Complexity for Linear Context-Free Rewriting Systems", |
| "authors": [ |
| { |
| "first": "Pierluigi", |
| "middle": [], |
| "last": "Crescenzi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universit\u00e0 di Firenze", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Rochester", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Marino", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Gianluca", |
| "middle": [], |
| "last": "Rossi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Universit\u00e0 di Padova", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We study the problem of finding the best head-driven parsing strategy for Linear Context-Free Rewriting System productions. A head-driven strategy must begin with a specified right-hand-side nonterminal (the head) and add the remaining nonterminals one at a time in any order. We show that it is NP-hard to find the best head-driven strategy in terms of either the time or space complexity of parsing.",
| "pdf_parse": { |
| "paper_id": "P11-1046", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We study the problem of finding the best head-driven parsing strategy for Linear Context-Free Rewriting System productions. A head-driven strategy must begin with a specified right-hand-side nonterminal (the head) and add the remaining nonterminals one at a time in any order. We show that it is NP-hard to find the best head-driven strategy in terms of either the time or space complexity of parsing.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Linear Context-Free Rewriting Systems (LCFRSs) (Vijay-Shankar et al., 1987 ) constitute a very general grammatical formalism which subsumes contextfree grammars (CFGs) and tree adjoining grammars (TAGs), as well as the synchronous context-free grammars (SCFGs) and synchronous tree adjoining grammars (STAGs) used as models in machine translation. 1 LCFRSs retain the fundamental property of CFGs that grammar nonterminals rewrite independently, but allow nonterminals to generate discontinuous phrases, that is, to generate more than one span in the string being produced. This important feature has been recently exploited by Maier and S\u00f8gaard (2008) and Kallmeyer and Maier (2010) for modeling phrase structure treebanks with discontinuous constituents, and by for modeling non-projective dependency treebanks.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 74, |
| "text": "(Vijay-Shankar et al., 1987", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 348, |
| "end": 349, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 628, |
| "end": 652, |
| "text": "Maier and S\u00f8gaard (2008)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 657, |
| "end": 683, |
| "text": "Kallmeyer and Maier (2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The rules of an LCFRS can be analyzed in terms of the properties of rank and fan-out. Rank is the number of nonterminals on the right-hand side (rhs) of a rule, while fan-out is the number of spans of the string generated by the nonterminal in the left-hand side (lhs) of the rule. CFGs are equivalent to LCFRSs with fan-out one, while TAGs are one type of LCFRSs with fan-out two. Rambow and Satta (1999) show that rank and fan-out induce an infinite, two-dimensional hierarchy in terms of generative power; while CFGs can always be reduced to rank two (Chomsky Normal Form), this is not the case for LCFRSs with any fan-out greater than one.",
| "cite_spans": [ |
| { |
| "start": 380, |
| "end": 403, |
| "text": "Rambow and Satta (1999)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "General algorithms for parsing LCFRSs build a dynamic programming chart of recognized nonterminals bottom-up, in a manner analogous to the CKY algorithm for CFGs (Hopcroft and Ullman, 1979) , but with time and space complexity that are dependent on the rank and fan-out of the grammar rules. Whenever it is possible, binarization of LCFRS rules, or reduction of rank to two, is therefore important for parsing, as it reduces the time complexity needed for dynamic programming. This has led to a number of binarization algorithms for LCFRSs, as well as factorization algorithms that factor rules into new rules with smaller rank, without necessarily reducing rank all the way to two. present an algorithm for binarizing certain LCFRS rules without increasing their fan-out, and Sagot and Satta (2010) show how to reduce rank to the lowest value possible for LCFRS rules of fan-out two, again without increasing fan-out. G\u00f3mez-Rodr\u00edguez et al. (2010) show how to factorize well-nested LCFRS rules of arbitrary fan-out for efficient parsing.",
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 189, |
| "text": "(Hopcroft and Ullman, 1979)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 778, |
| "end": 800, |
| "text": "Sagot and Satta (2010)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 920, |
| "end": 949, |
| "text": "G\u00f3mez-Rodr\u00edguez et al. (2010)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In general there may be a trade-off required between rank and fan-out, and a few recent papers have investigated this trade-off taking general LCFRS rules as input. G\u00f3mez-Rodr\u00edguez et al. (2009) present an algorithm for binarization of LCFRSs while keeping fan-out as small as possible. The algorithm is exponential in the resulting fan-out, and G\u00f3mez-Rodr\u00edguez et al. (2009) mention as an important open question whether polynomial-time algorithms to minimize fan-out are possible. Gildea (2010) presents a related method for binarizing rules while keeping the time complexity of parsing as small as possible. Binarization turns out to be possible with no penalty in time complexity, but, again, the factorization algorithm is exponential in the resulting time complexity. Gildea (2011) shows that a polynomial time algorithm for factorizing LCFRSs in order to minimize time complexity would imply an improved approximation algorithm for the well-studied graph-theoretic property known as treewidth. However, whether the problem of factorizing LCFRSs in order to minimize time complexity is NP-hard is still an open question in the above works.",
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 195, |
| "text": "G\u00f3mez-Rodr\u00edguez et al. (2009)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 347, |
| "end": 376, |
| "text": "G\u00f3mez-Rodr\u00edguez et al. (2009)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 774, |
| "end": 787, |
| "text": "Gildea (2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Similar questions have arisen in the context of machine translation, as the SCFGs used to model translation are also instances of LCFRSs, as already mentioned. For SCFG, Satta and Peserico (2005) showed that the exponent in the time complexity of parsing algorithms must grow at least as fast as the square root of the rule rank, and Gildea and \u0160tefankovi\u010d (2007) tightened this bound to be linear in the rank. However, neither paper provides an algorithm for finding the best parsing strategy, and Huang et al. (2009) mention that whether finding the optimal parsing strategy for an SCFG rule is NP-hard is an important problem for future work.",
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 195, |
| "text": "Satta and Peserico (2005)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 334, |
| "end": 363, |
"text": "Gildea and \u0160tefankovi\u010d (2007)",
| "ref_id": null |
| }, |
| { |
| "start": 499, |
| "end": 518, |
| "text": "Huang et al. (2009)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we investigate the problem of rule binarization for LCFRSs in the context of head-driven parsing strategies. Head-driven strategies begin with one rhs symbol, and add one nonterminal at a time. This rules out any factorization in which two subsets of nonterminals of size greater than one are combined in a single step. Head-driven strategies allow for the techniques of lexicalization and Markovization that are widely used in (projective) statistical parsing (Collins, 1997) . The statistical LCFRS parser of Kallmeyer and Maier (2010) binarizes rules head-outward, and therefore adopts what we refer to as a head-driven strategy. However, the binarization used by Kallmeyer and Maier (2010) simply proceeds left to right through the rule, without considering the impact of the parsing strategy on either time or space complexity. We examine the question of whether we can efficiently find the strategy that minimizes either the time complexity or the space complexity of parsing. While a naive algorithm can evaluate all r! head-driven strategies in time O(n \u2022 r!), where r is the rule's rank and n is the total length of the rule's description, we wish to determine whether a polynomial-time algorithm is possible.",
| "cite_spans": [ |
| { |
| "start": 475, |
| "end": 490, |
| "text": "(Collins, 1997)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 525, |
| "end": 551, |
| "text": "Kallmeyer and Maier (2010)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 681, |
| "end": 707, |
| "text": "Kallmeyer and Maier (2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Since parsing problems can be cast in terms of logic programming (Shieber et al., 1995) , we note that our problem can be thought of as a type of query optimization for logic programming. Query optimization for logic programming is NP-complete since query optimization for even simple conjunctive database queries is NP-complete (Chandra and Merlin, 1977) . However, the fact that variables in queries arising from LCFRS rules correspond to the endpoints of spans in the string to be parsed means that these queries have certain structural properties (Gildea, 2011) . We wish to determine whether the structure of LCFRS rules makes efficient factorization algorithms possible.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 87, |
| "text": "(Shieber et al., 1995)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 329, |
| "end": 355, |
| "text": "(Chandra and Merlin, 1977)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 551, |
| "end": 565, |
| "text": "(Gildea, 2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In the following, we show both the time- and space-complexity problems to be NP-hard for head-driven strategies. We provide what is to our knowledge the first NP-hardness result for a grammar factorization problem, which we hope will aid in understanding parsing algorithms in general.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section we briefly introduce LCFRSs and define the problem of optimizing head-driven parsing complexity for these formalisms. For a positive integer n, we write [n] to denote the set {1, . . . , n}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As already mentioned in the introduction, LCFRSs generate tuples of strings over some finite alphabet. This is done by associating each production p of a grammar with a function g that takes as input the tuples generated by the nonterminals in p's rhs, and rearranges their string components into a new tuple, possibly adding some alphabet symbols.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
"text": "Let V be some finite alphabet. We write V * for the set of all (finite) strings over V . For natural numbers r \u2265 0 and f, f 1 , . . . , f r \u2265 1, consider a function g :",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(V * ) f 1 \u00d7 \u2022 \u2022 \u2022 \u00d7 (V * ) fr \u2192 (V * ) f defined by an equation of the form g( x 1,1 , . . . , x 1,f 1 , . . . , x r,1 , . . . , x r,fr ) = \u03b1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Here the x i,j 's denote variables over strings in V * , and \u03b1 = \u03b1 1 , . . . , \u03b1 f is an f -tuple of strings over g's argument variables and symbols in V . We say that g is linear, non-erasing if \u03b1 contains exactly one occurrence of each argument variable. We call r and f the rank and the fan-out of g, respectively, and write r(g) and f (g) to denote these quantities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Example 1 g 1 ( x 1,1 , x 1,2 ) = x 1,1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
"text": "x 1,2 takes as input a tuple with two strings and returns a tuple with a single string, obtained by concatenating the components in the input tuple. g 2 ( x 1,1 , x 1,2 ) = ax 1,1 b, cx 1,2 d takes as input a tuple with two strings and wraps these strings with the symbols a, b, c, d \u2208 V . Both functions are linear, non-erasing, and we have r(g 1 ) = r(g 2 ) = 1, f (g 1 ) = 1 and f (g 2 ) = 2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "G = (V N , V T , P, S),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where V N and V T are finite, disjoint alphabets of nonterminal and terminal symbols, respectively. Each A \u2208 V N is associated with a value f (A), called its fan-out. The nonterminal S is the start symbol, with f (S) = 1. Finally, P is a set of productions of the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p : A \u2192 g(A 1 , A 2 , . . . , A r(g) ) ,", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A, A 1 , . . . , A r(g) \u2208 V N , and g : (V * T ) f (A 1 ) \u00d7 \u2022 \u2022 \u2022 \u00d7 (V * T ) f (A r(g) ) \u2192 (V * T ) f (A)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
"text": "is a linear, non-erasing function.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Production (1) can be used to transform the r(g) string tuples generated by the nonterminals A 1 , . . . , A r(g) into a tuple of f (A) strings generated by A. The values r(g) and f (g) are called the rank and fan-out of p, respectively, written r(p) and f (p). Given that f (S) = 1, S generates a set of strings, defining the language L(G).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Example 2 Let g 1 and g 2 be as in Example 1, and let g 3 () = \u03b5, \u03b5 . Consider the LCFRS G defined by the productions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
"text": "p 1 : S \u2192 g 1 (A), p 2 : A \u2192 g 2 (A) and p 3 : A \u2192 g 3 (). We have f (S) = 1, f (A) = f (G) = 2, r(p 3 ) = 0 and r(p 1 ) = r(p 2 ) = r(G) = 1. We have L(G) = {a n b n c n d n | n \u2265 1}. For instance, the string a 3 b 3 c 3 d 3 is generated by means [Figure 1 appears here; its fan-out/strategy rows are: 4: ((A 1 \u2295 A 4 ) \u2295 A 3 )* \u2295 A 2 ; 3: (A 1 \u2295 A 4 )* \u2295 (A 2 \u2295 A 3 ); 3: ((A 1 \u2295 A 2 )* \u2295 A 4 ) \u2295 A 3 ; 2: ((A 2 * \u2295 A 3 ) \u2295 A 4 ) \u2295 A 1 ]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 1: Some parsing strategies for production p in Example 3, and the associated maximum value for fan-out. Symbol \u2295 denotes the merging operation, and superscript * marks the first step in the strategy in which the highest fan-out is realized.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
"text": "of the following bottom-up process. First, the tuple \u03b5, \u03b5 is generated by A through p 3 . We then apply p 2 three times, starting from \u03b5, \u03b5 , resulting in the tuple a 3 b 3 , c 3 d 3 . Finally, the tuple (string) a 3 b 3 c 3 d 3 is generated by S through application of p 1 .",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LCFRSs and parsing complexity", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Existing parsing algorithms for LCFRSs exploit dynamic programming. These algorithms compute partial parses of the input string w, represented by means of specialized data structures called items. Each item indexes the boundaries of the segments of w that are spanned by the partial parse. In the special case of parsing based on CFGs, an item consists of two indices, while for TAGs four indices are required.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "In the general case of LCFRSs, parsing of a production p as in (1) can be carried out in r(g) \u2212 1 steps, collecting already available parses for nonterminals A 1 , . . . , A r(g) one at a time, and 'merging' these into intermediate partial parses. We refer to the order in which nonterminals are merged as a parsing strategy, or, equivalently, a factorization of the original grammar rule. Any parsing strategy results in a complete parse of p, spanning f (p) = f (A) segments of w and represented by some item with 2f (A) indices. However, intermediate items obtained in the process might span more than f (A) segments. We illustrate this through an example.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "g( x 1,1 , x 1,2 , x 2,1 , x 2,2 , x 3,1 , x 3,2 , x 4,1 , x 4,2 ) = x 1,1 x 2,1 x 3,1 x 4,1 , x 3,2 x 2,2 x 4,2 x 1,2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 3 Consider a linear non-erasing function", |
| "sec_num": null |
| }, |
| { |
| "text": ", and a production p : A \u2192 g(A 1 , A 2 , A 3 , A 4 ), where all the nonterminals involved have fan-out 2. We could parse p starting from A 1 , and then merging with A 3 , and A 2 . In this case, after we have collected the first three nonterminals, we have obtained a partial parse having fan-out 4, that is, an item spanning 4 segments of the input string. Alternatively, we could first merge A 1 and A 4 , then merge A 2 and A 3 , and finally merge the two obtained partial parses. This strategy is slightly better, resulting in a maximum fan-out of 3. Other possible strategies can be explored, displayed in Figure 1 . It turns out that the best parsing strategy leads to fan-out 2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 611, |
| "end": 619, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Example 3 Consider a linear non-erasing function", |
| "sec_num": null |
| }, |
| { |
| "text": "A 4 , v 1 v 2 v 3 v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Example 3 Consider a linear non-erasing function", |
| "sec_num": null |
| }, |
| { |
"text": "The maximum fan-out f realized by a parsing strategy determines the space complexity of the parsing algorithm. For an input string w, items will require (in the worst case) 2f indices, each taking O(|w|) possible values. This results in space complexity of O(|w| 2f ). In the special cases of parsing based on CFGs and TAGs, this provides the well-known space complexity of O(|w| 2 ) and O(|w| 4 ), respectively.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
"text": "It can also be shown that, if a partial parse having fan-out f is obtained by means of the combination of two partial parses with fan-out f 1 and f 2 , respectively, the resulting time complexity will be O(|w| f +f 1 +f 2 ) (Seki et al., 1991; Gildea, 2010) . As an example, in the case of parsing based on CFGs, nonterminals as well as partial parses all have fan-out one, resulting in the standard time complexity of O(|w| 3 ) of dynamic programming methods. When parsing with TAGs, we have to manipulate objects with fan-out two (in the worst case), resulting in time complexity of O(|w| 6 ).",
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 243, |
| "text": "(Seki et al., 1991;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 244, |
| "end": 257, |
| "text": "Gildea, 2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "We investigate here the case of general LCFRS productions, whose internal structure is considerably more complex than the context-free or the tree adjoining case. Optimizing the parsing complexity for a production means finding a parsing strategy that results in minimum space or time complexity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
"text": "We now turn the above optimization problems into decision problems. In the MIN SPACE STRATEGY problem one takes as input an LCFRS production p and an integer k, and must decide whether there exists a parsing strategy for p with maximum fan-out not larger than k. In the MIN TIME STRATEGY problem one is given p and k as above and must decide whether there exists a parsing strategy for p such that, in any of its steps merging two partial parses with fan-out f 1 and f 2 and resulting in a partial parse with fan-out f , the relation f +f 1 +f 2 \u2264 k holds.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
"text": "In this paper we investigate the above problems in the context of a specific family of linguistically motivated parsing strategies for LCFRSs, called head-driven. In a head-driven strategy, one always starts parsing a production p from a fixed nonterminal in its rhs, called the head of p, and merges the remaining nonterminals one at a time with the partial parse containing the head. Thus, under these strategies, the construction of partial parses that do not include the head is forbidden, and each parsing step involves at most one partial parse. In Figure 1 , all of the displayed strategies but the one in the second line are head-driven (for different choices of the head).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 554, |
| "end": 562, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
"text": "For an LCFRS production p, let H be its head nonterminal, and let A 1 , . . . , A n be all the non-head nonterminals in p's rhs, with n + 1 = r(p). A head-driven parsing strategy can be represented as a permutation \u03c0 over the set [n] , prescribing that the non-head nonterminals in p's rhs should be merged with H in the order A \u03c0(1) , A \u03c0(2) , . . . , A \u03c0(n) . Note that there are n! possible head-driven parsing strategies.",
| "cite_spans": [ |
| { |
| "start": 229, |
| "end": 232, |
| "text": "[n]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
"text": "To show that MIN SPACE STRATEGY is NP-hard under head-driven parsing strategies, we reduce from the MIN CUT LINEAR ARRANGEMENT problem, which is a decision problem over (undirected) graphs. Given a graph M = (V, E) with set of vertices V and set of edges E, a linear arrangement of M is a bijective function h from V to [n], where |V | = n. The cutwidth of M at gap i \u2208 [n \u2212 1] and with respect to a linear arrangement h is the number of edges crossing the gap between the i-th vertex and its successor: [Figure 3: The construction used to prove Theorem 1 builds the LCFRS production p shown, when given as input the graph of Figure 2.]",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 503, |
| "end": 511, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 625, |
| "end": 633, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
"text": "cw(M, h, i) = |{(u, v) \u2208 E | h(u) \u2264 i < h(v)}| . [Figure 3: p : A \u2192 g(H, A 1 , A 2 , A 3 , A 4 ) g( x H,e 1 , x H,e 2 , x H,e 3 , x H,e 4 , x A 1 ,e 1 ,l , x A 1 ,e 1 ,r , x A 1 ,e 3 ,l , x A 1 ,e 3 ,r , x A 2 ,e 1 ,l , x A 2 ,e 1 ,r , x A 2 ,e 2 ,l , x A 2 ,e 2 ,r , x A 3 ,e 2 ,l , x A 3 ,e 2 ,r , x A 3 ,e 3 ,l , x A 3 ,e 3 ,r , x A 3 ,e 4 ,l , x A 3 ,e 4 ,r , x A 4 ,e 4 ,l , x A 4 ,e 4 ,r ) = x A 1 ,e 1 ,l x A 2 ,e 1 ,l x H,e 1 x A 1 ,e 1 ,r x A 2 ,e 1 ,r , x A 2 ,e 2 ,l x A 3 ,e 2 ,l x H,e 2 x A 2 ,e 2 ,r x A 3 ,e 2 ,r , x A 1 ,e 3 ,l x A 3 ,e 3 ,l x H,e 3 x A 1 ,e 3 ,r x A 3 ,e 3 ,r , x A 3 ,e 4 ,l x A 4 ,e 4 ,l x H,e 4 x A 3 ,e 4 ,r x A 4 ,e 4 ,r ]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The cutwidth of M is then defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "cw(M ) = min h max i\u2208[n\u22121] cw(M, h, i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
"text": "In the MIN CUT LINEAR ARRANGEMENT problem, one is given as input a graph M and an integer k, and must decide whether cw(M ) \u2264 k. This problem has been shown to be NP-complete (Gavril, 1977) . Theorem 1 The MIN SPACE STRATEGY problem restricted to head-driven parsing strategies is NP-complete.",
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 189, |
| "text": "(Gavril, 1977)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "PROOF We start with the NP-hardness part. Let M = (V, E) and k be an input instance for MIN CUT LINEAR ARRANGEMENT, and let V = {v 1 , . . . , v n } and E = {e 1 , . . . , e q }. We assume there are no self loops in M , since these loops do not affect the value of the cutwidth and can therefore be removed. We construct an LCFRS production p and an integer k \u2032 as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Production p has a head nonterminal H and a nonhead nonterminal A i for each vertex v i \u2208 V . We let H generate tuples with a string component for each edge e i \u2208 E. Thus, we have f (H) = q. Accordingly, we use variables x H,e i , for each e i \u2208 E, to denote the string components in tuples generated by H.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "each v i \u2208 V , let E(v i ) \u2286 E be the set of edges impinging on v i ; thus |E(v i )| is the degree of v i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We let A i generate a tuple with two string components for each e j \u2208 E(v i ). Thus, we have f (A i ) = 2 \u2022 |E(v i )|. Accordingly, we use variables x A i ,e j ,l and x A i ,e j ,r , for each e j \u2208 E(v i ), to denote the string components in tuples generated by A i (here subscripts l and r indicate left and right positions, respectively; see below).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We set r(p) = n + 1 and f (p) = q, and define p by A \u2192 g(H, A 1 , A 2 , . . . , A n ), with", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "g(t H , t A 1 , . . . , t An ) = \u03b1 1 , . . . , \u03b1 q .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Here t H is the tuple of variables for H and each t A i , i \u2208 [n], is the tuple of variables for A i . Each string \u03b1 i , i \u2208 [q], is specified as follows. Let v s and v t be the endpoints of e i , with v s , v t \u2208 V and s < t. We define", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u03b1 i = x As,e i ,l x At,e i ,l x H,e i x As,e i ,r x At,e i ,r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Observe that whenever edge e i impinges on vertex v j , then the left and right strings generated by A j and associated with e i wrap around the string generated by H and associated with the same edge. Finally, we set k \u2032 = q + k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Example 4 Given the input graph of Figure 2 , our reduction constructs the LCFRS production shown in Figure 3 . Figure 4 gives a visualization of how the spans in this production fit together. For each edge in the graph of Figure 2 , we have a group of five spans in the production: one for the head nonterminal, and two spans for each of the two nonterminals corresponding to the edge's endpoints.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 43, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 101, |
| "end": 109, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 112, |
| "end": 120, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 223, |
| "end": 231, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "2 Assume now some head-driven parsing strategy \u03c0 for p. For each i \u2208 [n], we define D \u03c0 i to be the partial parse obtained after step i in \u03c0, consisting of the merge of nonterminals H, A \u03c0(1) , . . . , A \u03c0(i) . Consider some edge e j = (v s , v t ). We observe that for any D \u03c0 i that includes or excludes both nonterminals A s and A t , the \u03b1 j component in the definition of p is associated with a single string, and therefore contributes with a single unit to the fan-out of the partial parse. On the other hand, if D \u03c0 i includes only one nonterminal between A s and A t , the \u03b1 j component is associated with two strings and contributes with two units to the fan-out of the partial parse.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We can associate with \u03c0 a linear arrangement", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "h \u03c0 of M by letting h \u03c0 (v \u03c0(i) ) = i, for each v i \u2208 V .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "From the above observation on the fan-out of D \u03c0 i ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x A1,e1,l x A2,e1,l x H,e1 x A1,e1,r x A2,e1,r x A2,e2,l x A3,e2,l x H,e2 x A2,e2,r x A3,e2,r x A1,e3,l x A3,e3,l x H,e3 x A1,e3,r x A3,e3,r x A3,e4,l x A4,e4,l x H,e4 x A3,e4,r x A4,e4,r H A 1 A 2 A 3 A 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Figure 4: A visualization of how the spans for each nonterminal fit together in the left-to-right order defined by the production of Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 141, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "we have the following relation, for every i \u2208 [n \u2212 1]:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "f (D \u03c0 i ) = q + cw(M, h \u03c0 , i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We can then conclude that M, k is a positive instance of MIN CUT LINEAR ARRANGEMENT if and only if p, k \u2032 is a positive instance of MIN SPACE STRAT-EGY. This proves that MIN SPACE STRATEGY is NP-hard. To show that MIN SPACE STRATEGY is in NP, consider a nondeterministic algorithm that, given an LCFRS production p and an integer k, guesses a parsing strategy \u03c0 for p, and tests whether f (D \u03c0 i ) \u2264 k for each i \u2208 [n]. The algorithm accepts or rejects accordingly. Such an algorithm can clearly be implemented to run in polynomial time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We now turn to the MIN TIME STRATEGY problem, restricted to head-driven parsing strategies. Recall that we are now concerned with the quantity f 1 + f 2 + f , where f 1 is the fan-out of some partial parse D, f 2 is the fan-out of a nonterminal A, and f is the fan out of the partial parse resulting from the merge of the two previous analyses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We need to introduce the MODIFIED CUTWIDTH problem, which is a variant of the MIN CUT LIN-EAR ARRANGEMENT problem. Let M = (V, E) be some graph with |V | = n, and let h be a linear arrangement for M . The modified cutwidth of M at position i \u2208 [n] and with respect to h is the number of edges crossing over the i-th vertex:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "mcw(M, h, i) = |{(u, v) \u2208 E | h(u) < i < h(v)}| .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The modified cutwidth of M is defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "mcw(M ) = min h max i\u2208[n] mcw(M, h, i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In the MODIFIED CUTWIDTH problem one is given as input a graph M and an integer k, and must decide whether mcw(M ) \u2264 k. The MODIFIED CUTWIDTH problem has been shown to be NPcomplete by Lengauer (1981) . We strengthen this result below; recall that a cubic graph is a graph without self loops where each vertex has degree three.", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 200, |
| "text": "Lengauer (1981)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NP-completeness results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The MODIFIED CUTWIDTH problem restricted to cubic graphs is NP-complete.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "PROOF The MODIFIED CUTWIDTH problem has been shown to be NP-complete when restricted to graphs of maximum degree three by Makedon et al. (1985) , reducing from a graph problem known as bisection width (see also Monien and Sudborough (1988) ). Specifically, the authors construct a graph G \u2032 of maximum degree three and an integer k \u2032 from an input graph G = (V, E) with an even number n of vertices and an integer k, such that mcw(G \u2032 ) \u2264 k \u2032 if and only if the bisection width bw(G) of G is not greater than k, where", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 143, |
| "text": "Makedon et al. (1985)", |
| "ref_id": null |
| }, |
| { |
| "start": 211, |
| "end": 239, |
| "text": "Monien and Sudborough (1988)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "bw(G) = min A,B\u2286V |{(u, v) \u2208 E | u \u2208 A \u2227 v \u2208 B}| with A \u2229 B = \u2205, A \u222a B = V , and |A| = |B|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "The graph G \u2032 has vertices of degree two and three only, and it is based on a grid-like gadget R(r, c); see Figure 5 . For each vertex of G, G \u2032 includes a component R(2n 4 , 8n 4 +8). Moreover, G \u2032 has a component called an H-shaped graph, containing left and right columns R(3n 4 , 12n 4 + 12) connected by a middle bar R(2n 4 , 12n 4 + 9); see Figure 6 . From each of the n vertex components there is a sheaf of 2n 2 edges connecting distinct degree 2 vertices in the component to 2n 2 distinct degree 2 vertices in the middle bar of the H-shaped graph. Finally, for each edge", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 108, |
| "end": 116, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 347, |
| "end": 355, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "(v i , v j )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "of G there is an edge in G \u2032 connecting a degree 2 vertex in the component corresponding to the vertex v i with a degree 2 vertex in the component corresponding to the vertex v j . The integer k \u2032 is set to 3n 4 + n 3 + k \u2212 1. Makedon et al. (1985) show that the modified cutwidth of R(r, c) is r \u2212 1 whenever r \u2265 3 and c \u2265 4r + 8. They also show that an optimal linear arrangement for G \u2032 has the form depicted in Figure 6 , where half of the vertex components are to the left of the H-shaped graph and all the other vertex components are to the right. In this arrangement, the modified cutwidth is attested by the number of edges crossing over the vertices in the left and right columns of the H-shaped graph, which is equal to", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 248, |
| "text": "Makedon et al. (1985)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 415, |
| "end": 423, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "3n 4 \u2212 1 + n 2 2n 2 + \u03b3 = 3n 4 + n 3 + \u03b3 \u2212 1 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "where \u03b3 denotes the number of edges connecting vertices to the left with vertices to the right of the H-shaped graph. Thus, bw(G) \u2264 k if and only if mcw(G \u2032 ) \u2264 k \u2032 . All we need to show now is how to modify the components of G \u2032 in order to make it cubic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "Modifying the vertex components All vertices x of degree 2 of the components corresponding to a vertex in G can be transformed into a vertex of degree 3 by adding five vertices x 1 , . . . , x 5 connected as shown in the middle bar of Figure 5 . Observe that these five vertices can be positioned in the arrangement immediately after x in the order x 1 , x 2 , x 5 , x 3 , x 4 (see the right part of the figure). The resulting maximum modified cutwidth can increase by 2 in correspondence of vertex x 5 . Since the vertices of these components, in the optimal arrangement, have modified cutwidth smaller than 2n 4 + n 3 + n 2 , an increase by 2 is still smaller than the maximum modified cutwidth of the entire graph, which is 3n 4 + O(n 3 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 235, |
| "end": 243, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "The vertices of degree 2 of this part of the graph can be modified as in the previous paragraph. Indeed, in the optimal arrangement, these vertices have modified cutwidth smaller than 2n 4 + 2n 3 + n 2 , and an increase by 2 is still smaller than the maximum cutwidth of the entire graph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the middle bar of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "We replace the two copies of component R(3n 4 , 12n 4 + 12) with two copies of the new component D(3n 4 , 24n 4 + 16) shown in Figure 7 , which is a cubic graph. In order to prove that relation (2) still holds, it suffices to show that the modified cutwidth of the component D(r, c) is still r \u2212 1 whenever r \u2265 3 and c = 8r + 16.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 127, |
| "end": 135, |
| "text": "Figure 7", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "We first observe that the linear arrangement obtained by visiting the vertices of D(r, c) from top to bottom and from left to right has modified cutwidth r \u2212 1. Let us now prove that, for any partition of the vertices into two subsets V 1 and V 2 with |V 1 |, |V 2 | \u2265 4r 2 , there exist at least r disjoint paths between vertices of V 1 and vertices of V 2 . To this aim, we distinguish the following three cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Any row has (at least) one vertex in V 1 and one vertex in V 2 : in this case, it is easy to see there exist at least r disjoint paths between vertices of V 1 and vertices of V 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 There exist at least 3r 'mixed' columns, that is, columns with (at least) one vertex in V 1 and one vertex in V 2 . Again, it is easy to see that there exist at least r disjoint paths between vertices of V 1 and vertices of V 2 (at least one path every three columns).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 The previous two cases do not apply. Hence, there exists a row entirely formed by vertices of V 1 (or, equivalently, of V 2 ). The worst case is when this row is the smallest one, that is, the one with (c\u22123\u22121) 2 + 1 = 4r + 7 vertices. Since at most 3r \u2212 1 columns are mixed, we have that at most (3r \u2212 1)(r \u2212 2) = 3r 2 \u2212 7r + 2 vertices of V 2 are on these mixed columns. Since |V 2 | \u2265 4r 2 , this implies that at least r columns are fully contained in V 2 . On the other hand, at least 4r +7\u2212(3r \u22121) = r +8 columns are fully contained in V 1 . If the V 1 -columns interleave with the V 2 -columns, then there exist at least 2(r \u2212 1) disjoint paths between vertices of V 1 and vertices of V 2 . Otherwise, all the V 1columns precede or follow all the V 2 -columns (this corresponds to the optimal arrangement): in this case, there are r disjoint paths between vertices of V 1 and vertices of V 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "Observe now that any linear arrangement partitions the set of vertices in D(r, c) into the sets V 1 , consisting of the first 4r 2 vertices in the arrangement, and V 2 , consisting of all the remaining vertices. Since there are r disjoint paths connecting V 1 and V 2 , there must be at least r\u22121 edges passing over every vertex in the arrangement which is assigned to a position between the (4r 2 + 1)-th and the position 4r 2 + 1 from the right end of the arrangement: thus, the modified cutwidth of any linear arrangement of the vertices of D(r, c) is at least r \u2212 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "We can then conclude that the original proof of Makedon et al. (1985) still applies, according to relation (2). We can now reduce from the MODIFIED CUTWIDTH problem for cubic graphs to the MIN TIME STRATEGY problem restricted to head-driven parsing strategies.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 69, |
| "text": "Makedon et al. (1985)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifying the left/right columns of the H-shaped graph", |
| "sec_num": null |
| }, |
| { |
| "text": "The MIN TIME STRATEGY problem restricted to head-driven parsing strategies is NPcomplete.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "PROOF We consider hardness first. Let M and k be an input instance of the MODIFIED CUTWIDTH problem restricted to cubic graphs, where M = (V, E) and V = {v 1 , . . . , v n }. We construct an LCFRS production p exactly as in the proof of Theorem 1, with rhs nonterminals H, A 1 , . . . , A n . We also set k \u2032 = 2 \u2022 k + 2 \u2022 |E| + 9.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Assume now some head-driven parsing strategy \u03c0 for p. After parsing step i \u2208 [n], we have a partial parse D \u03c0 i consisting of the merge of nonterminals H, A \u03c0(1) , . . . , A \u03c0(i) . We write tc(p, \u03c0, i) to denote the exponent of the time complexity due to step i. As already mentioned, this quantity is defined as the sum of the fan-out of the two antecedents involved in the parsing step and the fan-out of its result:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "tc(p, \u03c0, i) = f (D \u03c0 i\u22121 ) + f (A \u03c0(i) ) + f (D \u03c0 i ) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Again, we associate with \u03c0 a linear arrangement", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "h \u03c0 of M by letting h \u03c0 (v \u03c0(i) ) = i, for each v i \u2208 V .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "As in the proof of Theorem 1, the fan-out of D \u03c0 i is then related to the cutwidth of the linear arrange-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "ment h \u03c0 of M at position i by f (D \u03c0 i ) = |E| + cw(M, h \u03c0 , i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "From the proof of Theorem 1, the fan-out of nonterminal A \u03c0(i) is twice the degree of vertex v \u03c0(i) , denoted by |E(v \u03c0(i) )|. We can then rewrite the above equation in terms of our graph M :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "tc(p, \u03c0, i) = 2 \u2022 |E| + cw(M, h \u03c0 , i \u2212 1) + + 2 \u2022 |E(v \u03c0(i) )| + cw(M, h \u03c0 , i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "The following general relation between cutwidth and modified cutwidth is rather intuitive:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "mcw(M, h \u03c0 , i) = 1 2 \u2022 [cw(M, h \u03c0 , i \u2212 1) + \u2212 |E(v \u03c0(i) )| + cw(M, h \u03c0 , i)] .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Combining the two equations above we obtain:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "tc(p, \u03c0, i) = 2 \u2022 |E| + 3 \u2022 |E(v \u03c0(i) )| + + 2 \u2022 mcw(M, h \u03c0 , i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Because we are restricting M to the class of cubic graphs, we can write:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "tc(p, \u03c0, i) = 2 \u2022 |E| + 9 + 2 \u2022 mcw(M, h \u03c0 , i) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "We can thus conclude that there exists a head-driven parsing strategy for p with time complexity not greater than 2 \u2022 |E| + 9 + 2 \u2022 k = k \u2032 if and only if mcw(M ) \u2264 k. The membership of MODIFIED CUTWIDTH in NP follows from an argument similar to the one in the proof of Theorem 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "We have established the NP-completeness of both the MIN SPACE STRATEGY and the MIN TIME STRATEGY decision problems. It is now easy to see that the problem of finding a space-or time-optimal parsing strategy for a LCFRS production is NP-hard as well, and thus cannot be solved in polynomial (deterministic) time unless P = NP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theorem 2", |
| "sec_num": null |
| }, |
| { |
| "text": "Head-driven strategies are important in parsing based on LCFRSs, both in order to allow statistical modeling of head-modifier dependencies and in order to generalize the Markovization of CFG parsers to parsers with discontinuous spans. However, there are n! possible head-driven strategies for an LCFRS production with a head and n modifiers. Choosing among these possible strategies affects both the time and the space complexity of parsing. In this paper we have shown that optimizing the choice according to either metric is NP-hard. To our knowledge, our results are the first NP-hardness results for a grammar factorization problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding remarks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "SCFGs and STAGs are specific instances of LCFRSs. Grammar factorization for synchronous models is an important component of current machine translation systems , and algorithms for factorization have been studied by for SCFGs and by Nesson et al. (2008) for STAGs. These algorithms do not result in what we refer as head-driven strategies, although, as machine translation systems improve, lexicalized rules may become important in this setting as well. However, the results we have presented in this paper do not carry over to the above mentioned synchronous models, since the fan-out of these models is bounded by two, while in our reductions in Section 3 we freely use unbounded values for this parameter. Thus the computational complexity of optimizing the choice of the parsing strategy for SCFGs is still an open problem.", |
| "cite_spans": [ |
| { |
| "start": 233, |
| "end": 253, |
| "text": "Nesson et al. (2008)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding remarks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, our results for LCFRSs only apply when we restrict ourselves to head-driven strategies. This is in contrast to the findings of Gildea (2011) , which show that, for unrestricted parsing strategies, a polynomial time algorithm for minimizing parsing complexity would imply an improved approximation algorithm for finding the treewidth of general graphs. Our result is stronger, in that it shows strict NPhardness, but also weaker, in that it applies only to head-driven strategies. Whether NP-hardness can be shown for unrestricted parsing strategies is an important question for future work.", |
| "cite_spans": [ |
| { |
| "start": 136, |
| "end": 149, |
| "text": "Gildea (2011)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding remarks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To be more precise, SCFGs and STAGs generate languages composed by pair of strings, while LCFRSs generate string languages. We can abstract away from this difference by assuming concatenation of components in a string pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The first and third authors are partially supported from the Italian PRIN project DISCO. The second author is partially supported by NSF grants IIS-0546554 and IIS-0910611.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Optimal implementation of conjunctive queries in relational data bases", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Ashok", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [ |
| "M" |
| ], |
| "last": "Chandra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Merlin", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Proc. ninth annual ACM symposium on Theory of computing, STOC '77", |
| "volume": "", |
| "issue": "", |
| "pages": "77--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashok K. Chandra and Philip M. Merlin. 1977. Op- timal implementation of conjunctive queries in rela- tional data bases. In Proc. ninth annual ACM sympo- sium on Theory of computing, STOC '77, pages 77-90.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Three generative, lexicalised models for statistical parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. 35th Annual Conference of the Association for Computational Linguistics (ACL-97)", |
| "volume": "", |
| "issue": "", |
| "pages": "16--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proc. 35th Annual Conference of the Association for Computational Lin- guistics (ACL-97), pages 16-23.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Some NP-complete problems on graphs", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Gavril", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Proc. 11th Conf. on Information Sciences and Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "91--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Gavril. 1977. Some NP-complete problems on graphs. In Proc. 11th Conf. on Information Sciences and Sys- tems, pages 91-95.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Worst-case synchronous grammar rules", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Daniel\u0161tefankovi\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. 2007 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-07)", |
| "volume": "", |
| "issue": "", |
| "pages": "147--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea and Daniel\u0160tefankovi\u010d. 2007. Worst-case synchronous grammar rules. In Proc. 2007 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-07), pages 147- 154, Rochester, NY.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Factoring synchronous grammars by sorting", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06) Poster Session", |
| "volume": "", |
| "issue": "", |
| "pages": "279--286", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea, Giorgio Satta, and Hao Zhang. 2006. Factoring synchronous grammars by sorting. In Proc. International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06) Poster Session, pages 279-286.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Optimal parsing strategies for Linear Context-Free Rewriting Systems", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. 2010 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "769--776", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea. 2010. Optimal parsing strategies for Lin- ear Context-Free Rewriting Systems. In Proc. 2010 Meeting of the North American chapter of the Associa- tion for Computational Linguistics (NAACL-10), pages 769-776.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Grammar factorization by tree decomposition", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Computational Linguistics", |
| "volume": "37", |
| "issue": "1", |
| "pages": "231--248", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea. 2011. Grammar factorization by tree de- composition. Computational Linguistics, 37(1):231- 248.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Optimal reduction of rule length in Linear Context-Free Rewriting Systems", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weir", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. 2009 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "539--547", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in Linear Context-Free Rewriting Systems. In Proc. 2009 Meeting of the North American chap- ter of the Association for Computational Linguistics (NAACL-09), pages 539-547.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Efficient parsing of well-nested linear context-free rewriting systems", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. 2010 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "276--284", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Marco Kuhlmann, and Gior- gio Satta. 2010. Efficient parsing of well-nested linear context-free rewriting systems. In Proc. 2010 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-10), pages 276- 284, Los Angeles, California.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Introduction to Automata Theory, Languages, and Computation", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hopcroft", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "D" |
| ], |
| "last": "Ullman", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John E. Hopcroft and Jeffrey D. Ullman. 1979. Intro- duction to Automata Theory, Languages, and Compu- tation. Addison-Wesley, Reading, MA.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Binarization of synchronous context-free grammars", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "4", |
| "pages": "559--595", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang, Hao Zhang, Daniel Gildea, and Kevin Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559-595.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Datadriven parsing with probabilistic linear context-free rewriting systems", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "537--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Kallmeyer and Wolfgang Maier. 2010. Data- driven parsing with probabilistic linear context-free rewriting systems. In Proc. 23rd International Con- ference on Computational Linguistics (Coling 2010), pages 537-545.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Treebank grammar techniques for non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. 12th Conference of the European Chapter of the ACL (EACL-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "478--486", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Kuhlmann and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proc. 12th Conference of the European Chapter of the ACL (EACL-09), pages 478-486.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Black-white pebbles and graph separation", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Lengauer", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "Acta Informatica", |
| "volume": "16", |
| "issue": "", |
| "pages": "465--475", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Lengauer. 1981. Black-white pebbles and graph separation. Acta Informatica, 16:465-475.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Treebanks and mild context-sensitivity", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. 13th Conference on Formal Grammar (FG-2008)", |
| "volume": "", |
| "issue": "", |
| "pages": "61--76", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wolfgang Maier and Anders S\u00f8gaard. 2008. Treebanks and mild context-sensitivity. In Philippe de Groote, editor, Proc. 13th Conference on Formal Grammar (FG-2008), pages 61-76, Hamburg, Germany. CSLI Publications.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Min cut is NPcomplete for edge weighted trees", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Monien", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [ |
| "H" |
| ], |
| "last": "Sudborough", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Theor. Comput. Sci", |
| "volume": "58", |
| "issue": "", |
| "pages": "209--229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Monien and I.H. Sudborough. 1988. Min cut is NP- complete for edge weighted trees. Theor. Comput. Sci., 58:209-229.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Optimal k-arization of synchronous tree adjoining grammar", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Nesson", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| }, |
| { |
| "first": "Stuart", |
| "middle": [ |
| "M" |
| ], |
| "last": "Shieber", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. 46th Annual Meeting of the Association for Computational Linguistics (ACL-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "604--612", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Nesson, Giorgio Satta, and Stuart M. Shieber. 2008. Optimal k-arization of synchronous tree adjoin- ing grammar. In Proc. 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 604-612.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Independent parallelism in finite copying parallel rewriting systems", |
| "authors": [ |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Theor. Comput. Sci", |
| "volume": "223", |
| "issue": "1-2", |
| "pages": "87--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Owen Rambow and Giorgio Satta. 1999. Independent parallelism in finite copying parallel rewriting sys- tems. Theor. Comput. Sci., 223(1-2):87-120.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Optimal rank reduction for linear context-free rewriting systems with fan-out two", |
| "authors": [ |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "525--533", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beno\u00eet Sagot and Giorgio Satta. 2010. Optimal rank re- duction for linear context-free rewriting systems with fan-out two. In Proc. 48th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 525-533, Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Some computational complexity results for synchronous contextfree grammars", |
| "authors": [ |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| }, |
| { |
| "first": "Enoch", |
| "middle": [], |
| "last": "Peserico", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "803--810", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Giorgio Satta and Enoch Peserico. 2005. Some com- putational complexity results for synchronous context- free grammars. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 803-810, Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "On multiple context-free grammars", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Seki", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Matsumura", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Fujii", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kasami", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Theoretical Computer Science", |
| "volume": "88", |
| "issue": "", |
| "pages": "191--229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Seki, T. Matsumura, M. Fujii, and T. Kasami. 1991. On multiple context-free grammars. Theoretical Com- puter Science, 88:191-229.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Principles and implementation of deductive parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Stuart", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Shieber", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Schabes", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "The Journal of Logic Programming", |
| "volume": "24", |
| "issue": "1-2", |
| "pages": "3--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of de- ductive parsing. The Journal of Logic Programming, 24(1-2):3-36.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Characterizing structural descriptions produced by various grammatical formalisms", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Vijay-Shankar", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "L" |
| ], |
| "last": "Weir", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "K" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Proc. 25th Annual Conference of the Association for Computational Linguistics (ACL-87)", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Vijay-Shankar, D. L. Weir, and A. K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proc. 25th An- nual Conference of the Association for Computational Linguistics (ACL-87), pages 104-111.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Synchronous binarization for machine translation", |
| "authors": [ |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. 2006 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-06)", |
| "volume": "", |
| "issue": "", |
| "pages": "256--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proc. 2006 Meeting of the North Ameri- can chapter of the Association for Computational Lin- guistics (NAACL-06), pages 256-263.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "linear context-free rewriting system is a tuple", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Example input graph for our construction of an LCFRS production.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "The R(5, 10) component (left), the modification of its degree 2 vertex x (middle), and the corresponding arrangement (right).", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "The optimal arrangement of G \u2032 .", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "text": "The D(5, 10) component.", |
| "uris": null |
| } |
| } |
| } |
| } |