| { |
| "paper_id": "D16-1004", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:37:51.709911Z" |
| }, |
| "title": "Using Left-corner Parsing to Encode Universal Structural Constraints in Grammar Induction", |
| "authors": [ |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Noji", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Nara Institute of Science and Technology", |
| "location": {} |
| }, |
| "email": "noji@is.naist.jp" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "yusuke@nii.ac.jp" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "mark.johnson@mq.edu.au" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Center-embedding is difficult to process and is known as a rare syntactic construction across languages. In this paper we describe a method to incorporate this assumption into the grammar induction tasks by restricting the search space of a model to trees with limited centerembedding. The key idea is the tabulation of left-corner parsing, which captures the degree of center-embedding of a parse via its stack depth. We apply the technique to learning of famous generative model, the dependency model with valence (Klein and Manning, 2004). Cross-linguistic experiments on Universal Dependencies show that often our method boosts the performance from the baseline, and competes with the current state-ofthe-art model in a number of languages.", |
| "pdf_parse": { |
| "paper_id": "D16-1004", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Center-embedding is difficult to process and is known as a rare syntactic construction across languages. In this paper we describe a method to incorporate this assumption into the grammar induction tasks by restricting the search space of a model to trees with limited centerembedding. The key idea is the tabulation of left-corner parsing, which captures the degree of center-embedding of a parse via its stack depth. We apply the technique to learning of famous generative model, the dependency model with valence (Klein and Manning, 2004). Cross-linguistic experiments on Universal Dependencies show that often our method boosts the performance from the baseline, and competes with the current state-ofthe-art model in a number of languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Human languages in the world are divergent, but they also exhibit many striking similarities (Greenberg, 1963; Hawkins, 2014) . At the level of syntax, one attractive hypothesis for such regularities is that any grammars of languages have evolved under the pressures, or biases, to avoid structures that are difficult to process. For example it is known that many languages have a preference for shorter dependencies (Gildea and Temperley, 2010; Futrell et al., 2015) , which originates from the difficulty in processing longer dependencies (Gibson, 2000) .", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 110, |
| "text": "(Greenberg, 1963;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 111, |
| "end": 125, |
| "text": "Hawkins, 2014)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 417, |
| "end": 445, |
| "text": "(Gildea and Temperley, 2010;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 446, |
| "end": 467, |
| "text": "Futrell et al., 2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 541, |
| "end": 555, |
| "text": "(Gibson, 2000)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Such syntactic regularities can also be useful in applications, in particular in unsupervised (Klein and Manning, 2004; Mare\u010dek and\u017dabokrtsk\u00fd, 2012; Bisk and Hockenmaier, 2013) or weaklysupervised (Garrette et al., 2015) grammar induction tasks, where the models try to recover the syntactic structure of language without access to the syntactically annotated data, e.g., from raw or partof-speech tagged text only. In these settings, finding better syntactic regularities universal across languages is essential, as they work as a small cue to the correct linguistic structures. A preference exploited in many previous works is favoring shorter dependencies, which has been encoded in various ways, e.g., initialization of EM (Klein and Manning, 2004) , or model parameters (Smith and Eisner, 2006) , and this has been the key to success of learning (Gimpel and Smith, 2012) .", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 119, |
| "text": "(Klein and Manning, 2004;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 120, |
| "end": 148, |
| "text": "Mare\u010dek and\u017dabokrtsk\u00fd, 2012;", |
| "ref_id": null |
| }, |
| { |
| "start": 149, |
| "end": 176, |
| "text": "Bisk and Hockenmaier, 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 197, |
| "end": 220, |
| "text": "(Garrette et al., 2015)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 727, |
| "end": 752, |
| "text": "(Klein and Manning, 2004)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 775, |
| "end": 799, |
| "text": "(Smith and Eisner, 2006)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 851, |
| "end": 875, |
| "text": "(Gimpel and Smith, 2012)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we explore the utility for another universal syntactic bias that has not yet been exploited in grammar induction: a bias against centerembedding. Center-embedding is a syntactic construction on which a clause is embedded into another one. An example is \"The reporter [who the senator [who Mary met] attacked] ignored the president.\", where \"who Mary met\" is embedded in a larger relative clause. These constructions are known to cause memory overflow (Miller and Chomsky, 1963; Gibson, 2000) , and also are rarely observed crosslinguistically (Karlsson, 2007; Noji and Miyao, 2014) . Our learning method exploits this universal property of language. Intuitively during learning our models explore the restricted search space, which excludes linguistically implausible trees, i.e., those with deeper levels of center-embedding.", |
| "cite_spans": [ |
| { |
| "start": 466, |
| "end": 492, |
| "text": "(Miller and Chomsky, 1963;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 493, |
| "end": 506, |
| "text": "Gibson, 2000)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 558, |
| "end": 574, |
| "text": "(Karlsson, 2007;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 575, |
| "end": 596, |
| "text": "Noji and Miyao, 2014)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We describe how these constraints can be imposed in EM with the inside-outside algorithm. The central Figure 1 : A set of transitions in left-corner parsing. The rules on the right side are the side conditions, in which P is the set of rules of a given CFG.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 110, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "SHIFT \u03c3 d\u22121 a \u2212 \u2192 \u03c3 d\u22121 |A d A \u2192 a \u2208 P SCAN \u03c3 d\u22121 |B/A d a \u2212 \u2192 \u03c3 d\u22121 |B d A \u2192 a \u2208 P PRED \u03c3 d\u22121 |A d \u03b5 \u2212 \u2192 \u03c3 d\u22121 |B/C d B \u2192 A C \u2208 P COMP \u03c3 d\u22121 |D/B d |A d+1 \u03b5 \u2212 \u2192 \u03c3 d\u22121 |D/C d B \u2192 A C \u2208 P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "idea is to tabulate left-corner parsing, on which its stack depth captures the degree of center-embedding of a partial parse. Each chart item keeps the current stack depth and we discard all items where the depth exceeds some threshold. The technique is general and can be applicable to any model on PCFG; in this paper, specifically, we describe how to apply the idea on the dependency model with valence (DMV) (Klein and Manning, 2004) , a famous generative model for dependency grammar induction. We focus our evaluation on grammar induction from part-of-speech tagged text, comparing the effect of several biases including the one against longer dependencies. Our main empirical finding is that though two biases, avoiding center-embedding and favoring shorter dependencies, are conceptually similar (both favor simpler grammars), often they capture different aspects of syntax, leading to different grammars. In particular our bias cooperates well with additional small syntactic cue such as the one that the sentence root tends to be a verb or a noun, with which our models compete with the strong baseline relying on a larger number of hand crafted rules on POS tags (Naseem et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 412, |
| "end": 437, |
| "text": "(Klein and Manning, 2004)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1174, |
| "end": 1195, |
| "text": "(Naseem et al., 2010)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our contributions are: the idea to utilize leftcorner parsing for a tool to constrain the models of syntax (Section 3), the formulation of this idea for DMV (Section 4), and cross-linguistic experiments across 25 languages to evaluate the universality of the proposed approach (Sections 5 and 6).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We first describe (arc-eager) left-corner (LC) parsing as a push-down automaton (PDA), and then reformulate it as a grammar transform. In previous work this algorithm has been called right-corner parsing (e.g., Schuler et al. (2010) ); we avoid this term and instead treat it as a variant of LC parsing following more recent studies, e.g., van Schijndel", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 232, |
| "text": "Schuler et al. (2010)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Left-corner parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "D B i j A j + 1 k COMP = == \u21d2 D B i j C A j + 1 k", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Left-corner parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 2: COMP combines two subtrees on the top of the stack. i, j, k are indices of spans.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Left-corner parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "and Schuler (2013) . The central motivation for this technique is to detect center-embedding in a parse efficiently. We describe this mechanism after providing the algorithm itself. We then give historical notes on LC parsing at the end of this section.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 18, |
| "text": "Schuler (2013)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Left-corner parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "PDA Let us assume a CFG is given, and it is in CNF. We formulate LC parsing as a set of transitions between configurations, each of which is a pair of the stack and the input position (next input symbol). In Figure 1 a transition \u03c3 1 a \u2212 \u2192 \u03c3 2 means that the stack is changed from \u03c3 1 to \u03c3 2 by reading the next input symbol a. We use a vertical bar to signify the append operation, e.g., \u03c3 = \u03c3 |\u03c3 1 denotes \u03c3 1 is the topmost symbol of \u03c3. Each stack symbol is either a nonterminal, or a pair of nonterminals, e.g., A/B, which represents a subtree rooted at A and is awaiting symbol B. We also decorate each symbol with depth; for example, \u03c3 d\u22121 |A d means the current stack depth is d, and the depth of the topmost symbol in \u03c3 is d \u2212 1. The bottom symbol on the stack is always the empty symbol \u03b5 0 with depth 0. Parsing begins with \u03b5 0 . Given the start symbol of CFG S, it finishes when S 1 is found on the stack.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 208, |
| "end": 216, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Left-corner parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The key transition here is COMP ( Figure 2 ). 1 Basically the algorithm builds a tree by expanding the hypothesis from left to right. In COMP, a subtree rooted at A is combined with the second top subtree (D/B) on the stack. This can be done by first predicting that A's parent symbol is B and its sibling is C; then it unifies two different Bs to combine them. PRED is simpler, and it just predicts the parent and sibling symbols of A. The input symbols are read by SHIFT and SCAN: SHIFT addes a new element on the stack while SCAN fills in the predicted sibling symbol. For an example, Figure 3 ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 34, |
| "end": 42, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 588, |
| "end": 596, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Left-corner parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Next input symbol 0 \u03b5 e 1 SHIFT E 1 f 2 PRED D/B 1 f 3 SHIFT D/B 1 F 2 g 4 PRED D/B 1 A/G 2 g 5 SCAN D/B 1 A 2 c 6 COMP D/C 1 c 7 SCAN D 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 3: Sequence of transitions in LC PDA to parse the tree in Figure 4(a) . this PDA works for parsing a tree in Figure 4 (a).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 76, |
| "text": "Figure 4(a)", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 116, |
| "end": 124, |
| "text": "Figure 4", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "D B C c A G g F f E e (a) D 1 c D/C 1 A 2 g A/G 2 F 2 f D/B 1 E 1 e (b)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "Grammar transform The algorithm above can be reformulated as a grammar transform, which becomes the starting point for our application to grammar induction. This can be done by extracting the operated top symbols on the stack in each transition:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "SHIFT: A d \u2192 a (A \u2192 a \u2208 P ); SCAN: B d \u2192 B/A d a (A \u2192 a \u2208 P ); PRED: B/C d \u2192 A d (B \u2192 A C \u2208 P ); COMP: D/C d \u2192 D/B d A d+1 (B \u2192 A C \u2208 P ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "where a rule on the right side is a condition given the set of rules P in the CFG. Figure 4 shows an example of this transform. The essential point is that each CFG rule in the transformed parse (b) corresponds to a transition in the original algorithm ( Figure 1 ). For example a rule D/C 1 \u2192 D/B 1 A 2 in the parse indicates that the stack configuration D/B 1 |A 2 occurs during parsing (just corresponding to the step 5 in Figure 3 ) and COMP is then applied. This can also be seen as an instantiation of Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 83, |
| "end": 91, |
| "text": "Figure 4", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 255, |
| "end": 263, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 426, |
| "end": 434, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 508, |
| "end": 516, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "Stack depth and center-embedding We use the term center-embedding to distinguish just the tree structures, i.e., ignoring symbols. That is, the tree in Figure 4(a) is the minimal, one degree of centerembedded tree, where the constituent rooted at A is embedded into a larger constituent rooted at D. Multiple, or degree \u2265 2 of center-embedding occurs if this constituent is also embedded into another larger constituent.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 152, |
| "end": 163, |
| "text": "Figure 4(a)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that it is only COMP that consumes the top two symbols on the stack. This means that a larger stack depth occurs only when COMP is needed. Furthermore, from Figure 2 COMP always induces a subtree involving new center-embedding, and this is the underlying mechanism that the stack depth of the algorithm captures the degree of center-embedding.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "One thing to note is that to precisely associate the stack depth and the degree of center-embedding the depth calculation in COMP should be revised as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "COMP: D/C d \u2192 D/B d A d (B \u2192 A C \u2208 P ) d = d (SPANLEN(A) = 1) d + 1 (otherwise),", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "where SPANLEN(A) calculates the span length of the constituent rooted at A, which is 2 in Figure 4 (b). This modification is necessary since COMP for a single token occurs for building purely right-branching structures. 2 Formally, then, given a tree with degree \u03bb of center-embedding the largest stack depth d * during parsing this tree is: Schuler et al. (2010) found that on English treebanks larger stack depth such as 3 or 4 rarely occurs while Noji and Miyao (2014) validated the language universality of this observation through crosslinguistic experiments. These suggest we may utilize LC parsing as a tool for exploiting universal syntactic biases as we discuss in Section 3.", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 363, |
| "text": "Schuler et al. (2010)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 450, |
| "end": 471, |
| "text": "Noji and Miyao (2014)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 98, |
| "text": "Figure 4", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "d * = \u03bb + 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "Historical notes Rosenkrantz and Lewis (1970) first presented the idea of LC parsing as a grammar transform. This is arc-standard, and has no relevance to center-embedding; Resnik (1992) and Johnson (1998) formulated an arc-eager variant by extending this algorithm. The presented algorithm here is the same as Schuler et al. (2010) , and is slightly different from Johnson (1998) . The difference is in the start and end conditions: while our parser begins with an empty symbol, Johnson's parser begins with the predicted start symbol, and finishes with an empty symbol.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 45, |
| "text": "Rosenkrantz and Lewis (1970)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 173, |
| "end": 186, |
| "text": "Resnik (1992)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 191, |
| "end": 205, |
| "text": "Johnson (1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 311, |
| "end": 332, |
| "text": "Schuler et al. (2010)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 366, |
| "end": 380, |
| "text": "Johnson (1998)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "3 Learning with structural constraints Now we discuss how to utilize LC parsing for grammar induction in general. An important observation in the above transform is that if we perform chart parsing, e.g., CKY, we can detect center-embedded trees efficiently in a chart. For example, by setting a threshold of stack depth \u03b4, we can eliminate any parses involving center-embedding up to degree \u03b4\u22121. Note that in a probabilistic setting, each weight of a transformed rule comes from the corresponding underlying CFG rule (i.e., the condition).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "For learning, our goal is to estimate \u03b8 of a generative model p(z, x|\u03b8) for parse z and its yields (words) x. We take an EM-based simplest approach, and multiply the original model by a constraint factor f (z, x) \u2208 [0, 1] to obtain a new model:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p (z, x|\u03b8) \u221d p(z, x|\u03b8)f (z, x),", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "and then optimize \u03b8 based on p (z, x|\u03b8). This is essentially the same approach as Smith and Eisner (2006) . As shown in Smith (2006), when training with EM we can increase the likelihood of p (z, x|\u03b8) by just using the expected counts from an E-step on the unnormalized distribution p(z, x|\u03b8)f (z, x).", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 105, |
| "text": "Eisner (2006)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "We investigate the following constraints in our experiments:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "f (z, x) = 0 (d * z > \u03b4) 1 (otherwise),", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "where d * z is the largest stack depth for z in LC parsing and \u03b4 is the threshold. This is a hard constraint, and can easily be achieved by removing all chart items (of LC transformed grammar) on which the depth of the symbol exceeds \u03b4. For example, when \u03b4 = 1 the model only explores trees without centerembedding, i.e., right-or left-linear trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "Length-based constraints By \u03b4 = 2, the model is allowed to explore trees with one degree of centerembedding. Besides these simple ones, we also investigate relaxing \u03b4 = 1 that results in an intermediate between \u03b4 = 1 and 2. Specifically, we relax the depth calculation in COMP (Eq. 1) as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "d = d (SPANLEN(A) \u2264 \u03be) d + 1 (otherwise),", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "where \u03be \u2265 1 controls the minimal length of a span regarded as embedded into another one. For example, when \u03be = 2, the parse in Figure 4(a) is not regarded as center-embedded because the span length of the constituent reduced by COMP (i.e., A) is 2. This modification is motivated with our observation that in many cases center-embedded constructions arise due to embedding of small chunks, rather than clauses. An example is \"... prepared [the cat 's] dinner\", where \"the cat 's\" is center-embedded in our definition. For this sentence, by relaxing the condition with, e.g., \u03be = 3, we can suppress the increase of stack depth. We treat \u03be as a hyperparameter in our experiments, and in practice, we find that this relaxed constraint leads to higher performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 127, |
| "end": 138, |
| "text": "Figure 4(a)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "shows how", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section we discuss how we can formulate the dependency model with valence (DMV) (Klein and Manning, 2004) , a famous generative model for dependency grammar induction, on LC parsing. Though as we will see, applying LC parsing for a dependency model is a little involved compared to simple PCFG models, dependency models have been the central for the grammar induction tasks, and we consider it is most appropriate for assessing effectiveness of our approach.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 113, |
| "text": "(Klein and Manning, 2004)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "DMV is a head-outward generative model of a dependency tree, controlled by two types of multinomial distributions. For stop \u2208 {STOP, \u00acSTOP}, \u03b8 S (stop|h, dir, adj) is a Bernoulli random variable to decide whether or not to attach further dependents in dir \u2208 {\u2190, \u2192} direction. The adjacency adj \u2208 {TRUE, FALSE} is the key factor to distinguish the distributions of the first and the other dependents, which is TRUE if h has no dependent yet in dir direction. Another type of parameter is \u03b8 A (a|h, dir), a probability that h takes a as a dependent in dir direction. For this particular model, we take the following approach to formulate it in LC parsing: 1) converting a dependency tree into a binary CFG parse; 2) applying LC transform on it; and 3) encoding DMV parameters into each CFG rule of the transformed grammar. 3 Below we discuss a problem for (1) and(2), and then consider parameterization. 4 Spurious ambiguity The central issue for applying LC parsing is the spurious ambiguity in dependency grammars. That is, there are more than one (binary) CFG parses corresponding to a given dependency tree. This is problematic mainly for two reasons: 1) we cannot specify the degree of centerembedding in a dependency tree uniquely; and 2) this one-to-many mapping prevents the insideoutside algorithm to work correctly (Eisner, 2000) .", |
| "cite_spans": [ |
| { |
| "start": 902, |
| "end": 903, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 1323, |
| "end": 1337, |
| "text": "(Eisner, 2000)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "X[ran] X[fast] fast X[ran] X[ran] ran X[dogs] dogs (a) X[ran]1 fast X[ran/fast] 1 X[ran]1 ran X[ran/ran]1 X[dogs]1 dogs (b) X[ran] X[ran] X[fast] fast X[ran] ran X[dogs] dogs (c) X[ran] 1 fast X[ran/fast] 1 X[ran] 1 ran X[ran/ran] 1 X[dogs] 1 dogs (d)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As a concrete example, Figures 5(a) and 5(c) show two CFG parses corresponding to the dependency tree dogs ran fast. We approach this problem by first providing a grammar transform, which generates all valid LC transformed parses (e.g., Figures 5(b) and 5(d)) and then restricting the grammar 3 Another approach might be just applying the technique in Section 3 to some PCFG that encodes DMV, e.g., Headden III et al. (2009) . The problem with this approach, in particular with split-head grammars (Johnson, 2007) , is that the calculated stack depth no longer reflects the degree of center-embedding in the original parse correctly. As we discuss later, instead, we can speed up inference by applying head-splitting after obtaining the LC transformed grammar.", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 249, |
| "text": "Figures 5(b)", |
| "ref_id": null |
| }, |
| { |
| "start": 399, |
| "end": 424, |
| "text": "Headden III et al. (2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 498, |
| "end": 513, |
| "text": "(Johnson, 2007)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 23, |
| "end": 35, |
| "text": "Figures 5(a)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "4 Technical details including the chart algorithm for splithead grammars can be found in the Ph.D. thesis of the first author (Noji, 2016) . Figure 7 : Implicit binarization of the restricted grammar. For each token, if its parent is in the right side (e.g., b), it attaches all left children first. The behavior is opposed when the parent is in its left (e.g., d). A dummy root token is placed at the end. for generating particular parses only.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 138, |
| "text": "(Noji, 2016)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 141, |
| "end": 149, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "X[w h ] i h j X[w h /wp] i h j p X[wp/wp] i j p", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Naive method Let us begin with the grammar below, which suffers from the spurious ambiguity:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "SHIFT: X[w h ] d \u2192 w h SCAN: X[w h ] d \u2192 X[w h /w p ] d w p L-PRED: X[w p /w p ] d \u2192 X[w h ] d (w h w p ); R-PRED: X[w h /w p ] d \u2192 X[w h ] d (w h w p ); L-COMP: X[w h /w p ] d \u2192 X[w h /w p ] d X[w a ] d (w a w p ); R-COMP: X[w h /w a ] d \u2192 X[w h /w p ] d X[w p ] d (w p w a ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Here X[a/b] denotes X[a]/X[b] while w h denotes the h-th word in the sentence w. We can interpret these rules as the operations on chart items ( Figure 6 ). Note that only PRED and COMP create new dependency arcs and we divide them depending on the direction of the created arcs (L and R). d is calculated by Eq. 4. Note also that for L-COMP and R-COMP h might equal p; X[ran/fast] 1 \u2192 X[ran/ran] 1 X[ran] 2 in Figure 5(d) is such a case for R-COMP.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 145, |
| "end": 154, |
| "text": "Figure 6", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 412, |
| "end": 423, |
| "text": "Figure 5(d)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
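To make the rule mechanics concrete, the sketch below models a chart item and one R-COMP step in Python. It is illustrative only: the field and function names are our own, spans are taken to be inclusive token indices, and the stack-depth bookkeeping of Eq. 4 is omitted.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Item:
    """Chart item sketch: X[w_h] has pred=None, X[w_h/w_p] has pred=p.
    Spans (i, j) are inclusive token indices; the stack-depth index d
    (Eq. 4) is omitted for brevity."""
    i: int               # left span index
    j: int               # right span index
    head: int            # h, index of the head word
    pred: Optional[int]  # p, index of the predicted word (None if complete)

def r_comp(left, right, a):
    """R-COMP sketch: X[w_h/w_a] -> X[w_h/w_p] X[w_p].  The completed
    item on the right fulfills the prediction p, which then predicts a
    new right dependent a; the created arc is w_p -> w_a."""
    assert left.pred == right.head and right.pred is None
    assert left.j + 1 == right.i  # the two spans must be adjacent
    return Item(left.i, right.j, left.head, a), (right.head, a)
```

Running `r_comp` on an item predicting word 3 and a completed item headed by word 3 returns the widened item predicting the new dependent, plus the arc 3 → a.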
| { |
| "text": "Removing spurious ambiguity We can show that by restricting conditions for some rules, the spurious ambiguity can be eliminated (the proof is omitted).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency grammar induction", |
| "sec_num": "4" |
| }, |
| { |
| "text": "2. Assume the span of X", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
| { |
| "text": "[w p ] d is (i, j) (i \u2264 p \u2264 j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
| { |
| "text": ". Then allow R-COMP only when i = p.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Intuitively, these conditions constraint the order that each word collects its left and right children. For example, by the condition 1, this grammar is prohibited to generate the parse of Figure 5(d) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 189, |
| "end": 200, |
| "text": "Figure 5(d)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
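As a sketch, the two conditions can be read as a single gating predicate on R-COMP applications; the argument names below are ours, not the paper's:

```python
def allow_r_comp(h, p, i):
    """Gate for R-COMP under the two disambiguation conditions.

    h: head index of the left antecedent X[w_h/w_p]
    p: predicted index, head of the completed right item X[w_p]
    i: left span index of X[w_p]
    """
    if h == p:         # condition 1: prohibit R-COMP when h = p
        return False
    return i == p      # condition 2: allow R-COMP only when i = p
```

For instance, the rule is blocked whenever the head and prediction coincide, ruling out derivations like the one in Figure 5(d).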
| { |
| "text": "Binarization Note that two CFG parses in Figures 5(a) and 5(c) differ in how we binarize a given dependency tree. This observation indicates that our restricted grammar implicitly binarizes a dependency tree, and the incurred stack depth (or the degree of center-embedding) is determined based on the structure of the binarized tree. Specifically, we can show that the presented grammar performs optimal binarization; i.e., it minimizes the incurred stack depth. Figure 7 shows an example, which is not regarded as center-embedded in our procedure. In summary, our method detects center-embedding for a dependency tree, but the degree is determined based on the structure of the binarized CFG parse.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 53, |
| "text": "Figures 5(a)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 463, |
| "end": 471, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Parameterization We can encode DMV parameters into each rule. A new arc is introduced by one of {L/R}-{PRED/COMP}, and the stop probabilities can be assigned appropriately in each rule by calculating the valence from indices in the rule. For example, after L-PRED, w h does not take any right dependents so \u03b8 S (stop|w h , \u2192, h = j), where j is the right span index of X[w h ], is multiplied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Improvement Though we omit the details, we can improve the time complexity of the above grammar from O(n 6 ) to O(n 4 ) applying the technique similar to Eisner and Satta (1999) without changing the binarization mechanism mentioned above. We implemented this improved grammar.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 177, |
| "text": "Eisner and Satta (1999)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prohibit R-COMP when h = p;", |
| "sec_num": "1." |
| }, |
| { |
| "text": "A sound evaluation metric in grammar induction is known as an open problem (Schwartz et al., 2011; Bisk and Hockenmaier, 2013) , which essentially arises from the ambiguity in the notion of head. For example, Universal dependencies (UD) is the recent standard in annotation and prefers content words to be heads, but as shown below this is very different from the conventional style, e.g., the one in CoNLL shared tasks (Johansson and Nugues, 2007) : The problem is that both trees are correct under some linguistic theories but the standard metric, unlabeled attachment score (UAS), only takes into account the annotation of the current gold data. Our goal in this experiment is to assess the effect of our structural constraints. To this end, we try to eliminate such arbitrariness in our evaluation as much as possible in the following way:", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 98, |
| "text": "(Schwartz et al., 2011;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 99, |
| "end": 126, |
| "text": "Bisk and Hockenmaier, 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 420, |
| "end": 448, |
| "text": "(Johansson and Nugues, 2007)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 We experiment on UD, in which every treebank follows the consistent UD style annotation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 We restrict the model to explore only trees that follow the UD style annotation during learning 5 , by prohibiting every function word 6 in a sentence to have any dependents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 We calculate UAS in a standard way.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We use UD of version 1.2. Some treebanks are very small, so we select the top 25 largest languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The input to the model is coarse universal POS tags. Punctuations are stripped off. All models are trained on sentences of length \u2264 15 and tested on \u2264 40.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Initialization Much previous work of dependency grammar induction relies on the technique called harmonic initialization, which also biases the model towards shorter dependencies (Klein and Manning, 2004) . Since our focus is to see the effect of structural constraints, we do not try this and initialize models uniformly. However, we add a baseline model with this initialization in our comparison to see the relative strength of our approach.", |
| "cite_spans": [ |
| { |
| "start": 179, |
| "end": 204, |
| "text": "(Klein and Manning, 2004)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
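For reference, harmonic initialization (used only by the HARM baseline described below) can be sketched as follows. This is our own approximation: the attachment probability is set proportional to the inverse string distance, biasing toward short dependencies; the exact normalization in Klein and Manning (2004) differs in details.

```python
def harmonic_attach_init(n):
    """Sketch of harmonic initialization for an n-word sentence:
    P(a | h) is set proportional to 1 / |h - a|, so nearby words are
    preferred as dependents (normalization conventions vary)."""
    probs = {}
    for h in range(n):
        weights = {a: 1.0 / abs(h - a) for a in range(n) if a != h}
        z = sum(weights.values())
        probs[h] = {a: w / z for a, w in weights.items()}
    return probs
```

For a 4-word sentence, word 0 assigns word 1 twice the initial mass of word 2 and three times that of word 3.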
| { |
| "text": "Models For the baseline, we employ a variant of DMV with features (Berg-Kirkpatrick et al., 2010), which is simple yet known to boost the performance well. The feature templates are almost the same; the only change is that we add backoff features for STOP probabilities that ignore both direction and adjacency, which we found slightly improves the performance in a preliminary experiment. We set the regularization parameter to 10 though in practice we found the model is less sensitive to this value. We run 100 iterations of EM for each setting. The dif-ference of each model is then the type of constraints imposed during the E-step 7 , or initialization:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Baseline (FUNC): Function word constraints;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 HARM: FUNC with harmonic initialization;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 DEP: FUNC + stack depth constraints (Eq. 3);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 LEN: FUNC + soft dependency length bias, which we describe below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For DEP, we use \u03b4 = 1.\u03be to denote the relaxed maximum depth allowing span length up to \u03be (Eq. 4). LEN is the previously explored structural bias (Smith and Eisner, 2006) , which penalizes longer dependencies by modifying each attachment score:", |
| "cite_spans": [ |
| { |
| "start": 156, |
| "end": 169, |
| "text": "Eisner, 2006)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u03b8 A (a|h, dir) = \u03b8 A (a|h, dir) \u2022 e \u2212\u03b3\u2022(|h\u2212a|\u22121) , (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "where \u03b3 (\u2265 0) determines the strength of the bias and |h \u2212 a| is (string) distance between h and a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
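A minimal sketch of Eq. 5 (the function name is ours): adjacent attachments (|h − a| = 1) are left untouched, and γ = 0 recovers the unbiased model:

```python
import math

def length_biased_attach(theta, h, a, gamma):
    """Rescale the attachment score theta = theta_A(a | h, dir) by
    e^{-gamma * (|h - a| - 1)}, as in Eq. 5.  Longer dependencies
    receive an exponentially larger penalty."""
    return theta * math.exp(-gamma * (abs(h - a) - 1))
```

Because the penalty is multiplicative and per-arc, it can be folded directly into the E-step's inside computations without changing the chart algorithm.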
| { |
| "text": "Note that DEP and LEN are closely related; generally center-embedded constructions are accompanied by longer dependencies so LEN also penalizes center-embedding implicitly. However, the opposite is not true and there exist many constructions with longer dependencies without center-embedding. By comparing these two settings, we discuss the worth of focusing on constraining center-embedding relative to the simpler bias on dependency length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Finally we also add the system of Naseem et al. (2010) in our comparison. This system encodes many manually crafted rules between POS tags with the posterior regularization technique. For example, the model is encouraged to find NOUN \u2192 ADJ relationship. Our systems cannot access to these core grammatical rules so it is our strongest baseline. 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Constraining root word We also see the effects of the constraints when a small amount of grammatical rule is provided. In particular, we restrict the candidate root words of the sentence to a noun or a verb; similar rules have been encoded in past work such as Gimpel and Smith (2012) and the CCG induction system of Bisk and Hockenmaier (2013) . Hyperparameters Selecting hyperparameters in multilingual grammar induction is difficult; some works tune values for each language based on the development set (Smith and Eisner, 2006; Bisk et al., 2015) , but this violates the assumption of unsupervised learning. We instead follow many works (Mare\u010dek and\u017dabokrtsk\u00fd, 2012; Naseem et al., 2010) and select the values with the English data. For this, we use the WSJ data, which we obtain in UD style from the Stanford CoreNLP (ver. 3.6.0). 9", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 284, |
| "text": "Gimpel and Smith (2012)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 317, |
| "end": 344, |
| "text": "Bisk and Hockenmaier (2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 507, |
| "end": 531, |
| "text": "(Smith and Eisner, 2006;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 532, |
| "end": 550, |
| "text": "Bisk et al., 2015)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 641, |
| "end": 670, |
| "text": "(Mare\u010dek and\u017dabokrtsk\u00fd, 2012;", |
| "ref_id": null |
| }, |
| { |
| "start": 671, |
| "end": 691, |
| "text": "Naseem et al., 2010)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "6 Experiments WSJ Figure 8 shows the result on WSJ. Both DEP and LEN have one parameter: the maximum depth \u03b4, and \u03b3 (Eq. 5), and the figure shows the sensitivity on them. Note that x-axis = 0 represents FUNC.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 26, |
| "text": "Figure 8", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For LEN, we can see the optimal parameter \u03b3 is 0.1, and degrades the performance when increasing the value; i.e., the small bias is the best. For DEP, we find the best setting is 1.3, i.e., allowing embedded constituents of length 3 or less (\u03be = 3 in Eq. 4). We can see that allowing depth 2 degrades the performance, indicating that depth 2 allows too many trees and does not reduce the search space effectively. 10 Multilingual results Table 1 shows the main multilingual results. When we see \"No root constraint\" block, we notice that our DEP boosts the performance in many languages (e.g., Bulgarian, French, We then move on to the settings with the constraint on root tags. Interestingly, in these settings DEP performs the best. The model competes with Naseem et al.'s system in average, and outperforms it in many languages, e.g., Bulgarian, Czech, etc. LEN, on the other hand, decreases the average score.", |
| "cite_spans": [ |
| { |
| "start": 414, |
| "end": 416, |
| "text": "10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 438, |
| "end": 445, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Analysis Why does DEP perform well in particular with the restriction on root candidates? To shed light on this, we inspected the output parses of English with no root constraints, and found that the types of errors are very different across constraints. Figure 9 shows a typical example of the difference. One difference between trees is in the constructions of phrase \"On ... pictures\". LEN predicts that \"On the next two\" comprises a constituent, which modifies \"pictures\" while DEP predicts that \"the ... pictures\" comprises a constituent, which is correct, although the head of the determiner is incorrectly predicted. On the other hand, LEN works well to find more primitive dependency arcs between POS tags, such as arcs from verbs to nouns, which are often incorrectly recognized by DEP.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 255, |
| "end": 263, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "These observations may partially answer the question above. The main source of improvements by DEP is detections of constituents, but this constraint itself does not help to resolve some core dependency relationships, e.g., arcs from verbs to nouns. The constraint on root POS tags is thus orthogonal to this approach, and it may help to find such core dependencies. On the other hand, the dependency length bias is the most effective to find basic dependency relationships between POS tags while the resulting tree may involve implausible constituents. Thus the effect of the length bias seems somewhat overlapped with the root POS constraints, which may be the reason why they do not well collaborate with each other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Bracket scores We verify the above intuition quantitatively. To this end, we convert both the predicted and gold dependency trees into the unlabeled bracket structures, and then compare them on the standard PARSEVAL metrics. This bracket tree is not binarized; for example, we extract (X a b (X c d)) from the tree a b c d. Table 2 shows the results, and we can see that DEP always performs the best, showing that DEP leads to the models that find better constituent structures. Of particular note UAS F1 DEP 48.1 30.5 LEN 48.5 27.9 DEP+LEN 49.2 27.0 Table 3 : Average scores of DEP, LEN, and the combination.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 324, |
| "end": 331, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 551, |
| "end": 558, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
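The conversion from a dependency tree to unlabeled brackets can be sketched as follows (our own code, under the assumption that each head together with its full yield forms one bracket and single-word spans are dropped). For the example tree a b c d, with a heading b and c, and c heading d, it yields exactly the spans of (X a b (X c d)):

```python
def dep_to_brackets(heads):
    """Sketch: convert a dependency tree (heads[i] = parent of word i,
    -1 for the root) into a set of unlabeled bracket spans for the
    PARSEVAL comparison.  Spans are inclusive (start, end) pairs; the
    tree is not binarized, so each head and all its descendants form
    one bracket, and single-word spans are dropped."""
    n = len(heads)
    children = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    def yield_span(i):
        # Smallest span covering word i and all its descendants.
        lo = hi = i
        for c in children[i]:
            clo, chi = yield_span(c)
            lo, hi = min(lo, clo), max(hi, chi)
        return lo, hi

    spans = {yield_span(i) for i in range(n) if children[i]}
    return {(lo, hi) for lo, hi in spans if hi > lo}
```

Precision, recall, and F1 are then computed by intersecting the predicted and gold span sets in the usual PARSEVAL way.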
| { |
| "text": "is in Enlgish the bracket and dependency scores are only loosely correlated. In Table 1 , UASs for FUNC, DEP, and LEN are 37.2, 39.8, and 52.1, respectively, though F1 of DEP is substantially higher. This suggests that DEP often finds more linguistically plausible structures even when the improvement in UAS is modest. We conjecture that this performance change between constraints essentially arise due to the nature of DEP, which eliminates center-embedding, i.e., implausible constituent structures, rather than dependency arcs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 80, |
| "end": 87, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Combining DEP and LEN These results suggest DEP and LEN capture different aspects of syntax. To furuther understand this difference, we now evaluate the models with both constraints. Table 3 shows the average scores across languages (without root constraints). Interestingly, the combination (DEP+LEN) performs the best in UAS while the worst in bracket F1. This indicates the ability of DEP to find good constituent boundaries is diminished by combining LEN. We feel the results are expected observing that center-embedded constructions are a special case of longer dependency constructions. In other words, LEN is a stronger constraint than DEP in that the structures penalized by DEP are only a subset of structures penalized by LEN. Thus when LEN and DEP are combined LEN overwhelms, and the advantage of DEP is weakened. This also suggests not penalizing all longer dependencies is important for learning accurate grammars. The improvement of UAS suggests there are also collaborative effects in some aspect.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 183, |
| "end": 190, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We have shown that a syntactic constraint that eliminates center-embedding is helpful in dependency grammar induction. In particular, we found that our method facilitates to find linguistically correct constituent structures, and given an additional cue on dependency, the models compete with the sys-tem relying on a significant amount of prior linguistic knowledge. Future work includes applying our DEP constraint into other PCFG-based grammar induction tasks beyond dependency grammars. In particular, it would be fruitful to apply our idea into constituent structure induction for which, to our knowledge, there has been no successful PCFGbased learning algorithm. As discussed in de Marcken (1999) one reason for the failures of previous work is the lack of necessary syntactic biases, and our approach could be useful to alleviate this issue. Finally, though we have focused on unsupervised learning for simplicity, we believe our syntactic bias also leads to better learning in more practical scenarios, e.g., weakly supervised learning (Garrette et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 1045, |
| "end": 1068, |
| "text": "(Garrette et al., 2015)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "van Schijndel and Schuler (2013) employ different transition names, e.g., L-and L+; we avoid them as they are less informative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Schuler et al. (2010) skip this subtlety by only concerning stack depth after PRED or COMP. We do not take this approach since ours allows a flexible extension described in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We remove the restriction at test time though we found it does not affect the performance.6 A word with one of the following POS tags: ADP, AUX, CONJ, DET, PART, and SCONJ.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We again remove the restrictions at decoding as we observed that the effects are very small.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the English data in UD is Web Treebank(Silveira et al., 2014), not the standard WSJ Penn treebank.10 We see the same effects when training with longer sentences (e.g., length \u2264 20). This is probably because a looser constraint does nothing for shorter sentences. In other words, the model can restrict the search space only for longer sentences, which are relatively small in the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": " 8 We encode the customized rules that follow UD scheme. The following 13 rules are used:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| }, |
| { |
| "text": "We would like to thank John Pate for the help in preliminary work, as well as Taylor Berg-Kirkpatric for sharing his code. We are also grateful to Edson Miyamoto and Makoto Kanazawa for the valuable feedbacks. The first author was supported by JSPS KAKENHI Gran-in-Aid for JSPS Fellows (Grant Numbers 15J07986), and MOU Grant in National Institute of Informatics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Painless unsupervised learning with features", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Bouchard-C\u00f4t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "582--590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless unsu- pervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 582-590, Los Angeles, California, June. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "An hdp model for inducing combinatory categorial grammars", |
| "authors": [ |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "75--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yonatan Bisk and Julia Hockenmaier. 2013. An hdp model for inducing combinatory categorial grammars. Transactions of the Association for Computational Linguistics, 1:75-88.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Labeled grammar induction with minimal supervision", |
| "authors": [ |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| }, |
| { |
| "first": "Christos", |
| "middle": [], |
| "last": "Christodoulopoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "870--876", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yonatan Bisk, Christos Christodoulopoulos, and Julia Hockenmaier. 2015. Labeled grammar induction with minimal supervision. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Pa- pers), pages 870-876, Beijing,China, July.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "On the unsupervised induction of phrase-structure grammars", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "De Marcken", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Natural Language Processing Using Very Large Corpora", |
| "volume": "11", |
| "issue": "", |
| "pages": "191--208", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. de Marcken. 1999. On the unsupervised induction of phrase-structure grammars. In Susan Armstrong, Kenneth Church, Pierre Isabelle, Sandra Manzi, Eve- lyne Tzoukermann, and David Yarowsky, editors, Nat- ural Language Processing Using Very Large Corpora, volume 11 of Text, Speech and Language Technology, pages 191-208. Springer Netherlands.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Efficient parsing for bilexical context-free grammars and head automaton grammars", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics on Computational Linguistics, ACL '99", |
| "volume": "", |
| "issue": "", |
| "pages": "457--464", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Eisner and Giorgio Satta. 1999. Efficient pars- ing for bilexical context-free grammars and head au- tomaton grammars. In Proceedings of the 37th An- nual Meeting of the Association for Computational Linguistics on Computational Linguistics, ACL '99, pages 457-464, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Bilexical Grammars and Their Cubic-Time Parsing Algorithms", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Advances in Probabilistic and Other Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "29--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Eisner. 2000. Bilexical Grammars and Their Cubic-Time Parsing Algorithms. In Harry Bunt and Anton Nijholt, editors, Advances in Probabilistic and Other Parsing Technologies, pages 29-62. Kluwer Academic Publishers, October.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Large-scale evidence of dependency length minimization in 37 languages", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Futrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Mahowald", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Gibson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "112", |
| "issue": "", |
| "pages": "10336--10341", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015. Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the Na- tional Academy of Sciences, 112(33):10336-10341.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Weakly-supervised grammar-informed bayesian ccg parser learning", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Garrette", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Garrette, Chris Dyer, Jason Baldridge, and Noah Smith. 2015. Weakly-supervised grammar-informed bayesian ccg parser learning.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The dependency locality theory: A distance-based theory of linguistic complexity", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Gibson", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Image, language, brain: Papers from the first mind articulation project symposium", |
| "volume": "", |
| "issue": "", |
| "pages": "95--126", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Gibson. 2000. The dependency locality theory: A distance-based theory of linguistic complexity. In Im- age, language, brain: Papers from the first mind artic- ulation project symposium, pages 95-126.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Do grammars minimize dependency length?", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Temperley", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Cognitive Science", |
| "volume": "34", |
| "issue": "2", |
| "pages": "286--310", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea and David Temperley. 2010. Do gram- mars minimize dependency length? Cognitive Sci- ence, 34(2):286-310.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Concavity and initialization for unsupervised dependency parsing", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "577--581", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Gimpel and Noah A. Smith. 2012. Concavity and initialization for unsupervised dependency pars- ing. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 577-581, Montr\u00e9al, Canada, June. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Some universals of grammar with particular reference to the order of meaningful elements", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [ |
| "H" |
| ], |
| "last": "Greenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1963, |
| "venue": "Universals of Human Language", |
| "volume": "", |
| "issue": "", |
| "pages": "73--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph H. Greenberg. 1963. Some universals of gram- mar with particular reference to the order of meaning- ful elements. In Joseph H. Greenberg, editor, Univer- sals of Human Language, pages 73-113. MIT Press, Cambridge, Mass.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Cross-linguistic variation and efficiency", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hawkins", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John A. Hawkins. 2014. Cross-linguistic variation and efficiency. Oxford University Press, January.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Improving unsupervised dependency parsing with richer contexts and smoothing", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "P" |
| ], |
| "last": "Headden", |
| "suffix": "III" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "McClosky", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "101--109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William P. Headden III, Mark Johnson, and David Mc- Closky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Pro- ceedings of Human Language Technologies: The 2009", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "101--109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 101-109, Boulder, Colorado, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Finite-state approximation of constraint-based grammars using left-corner grammar transforms", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "COLING-ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "619--623", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for English. In Proceedings of NODALIDA 2007, Tartu, Estonia, May. Mark Johnson. 1998. Finite-state approximation of constraint-based grammars using left-corner grammar transforms. In Christian Boitet and Pete Whitelock, editors, COLING-ACL, pages 619-623. Morgan Kauf- mann Publishers / ACL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Transforming projective bilexical dependency grammars into efficiently-parsable cfgs with unfold-fold", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "168--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson. 2007. Transforming projective bilexical dependency grammars into efficiently-parsable cfgs with unfold-fold. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguis- tics, pages 168-175, Prague, Czech Republic, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Constraints on multiple centerembedding of clauses", |
| "authors": [ |
| { |
| "first": "Fred", |
| "middle": [], |
| "last": "Karlsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Linguistics", |
| "volume": "43", |
| "issue": "2", |
| "pages": "365--392", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fred Karlsson. 2007. Constraints on multiple center- embedding of clauses. Journal of Linguistics, 43(2):365-392.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Corpusbased induction of syntactic structure: Models of dependency and constituency", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "478--485", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher Manning. 2004. Corpus- based induction of syntactic structure: Models of de- pendency and constituency. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 478-485, Barcelona, Spain, July.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Exploiting reducibility in unsupervised dependency parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mare\u010dek", |
| "suffix": "" |
| }, |
| { |
| "first": "Zden\u011bk", |
| "middle": [], |
| "last": "\u017dabokrtsk\u00fd", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "297--307", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Mare\u010dek and Zden\u011bk \u017dabokrtsk\u00fd. 2012. Ex- ploiting reducibility in unsupervised dependency pars- ing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 297-307, Jeju Island, Korea, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Finitary models of language users", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Chomsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1963, |
| "venue": "Handbook of Mathematical Psychology", |
| "volume": "2", |
| "issue": "", |
| "pages": "419--491", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller and Noam Chomsky. 1963. Finitary models of language users. In D. Luce, editor, Handbook of Mathematical Psychology, volume 2, pages 419-491. John Wiley & Sons.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Using universal linguistic knowledge to guide grammar induction", |
| "authors": [ |
| { |
| "first": "Tahira", |
| "middle": [], |
| "last": "Naseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Harr", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1234--1244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowl- edge to guide grammar induction. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1234-1244, Cambridge, MA, October. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Left-corner transitions on dependency parsing", |
| "authors": [ |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Noji", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "2140--2150", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroshi Noji and Yusuke Miyao. 2014. Left-corner transitions on dependency parsing. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2140-2150, Dublin, Ireland, August. Dublin City Uni- versity and Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Left-corner Methods for Syntactic Modeling with Universal Structural Constraints", |
| "authors": [ |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Noji", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroshi Noji. 2016. Left-corner Methods for Syntac- tic Modeling with Universal Structural Constraints. Ph.D. thesis, Graduate University for Advanced Stud- ies, Tokyo, Japan, March.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Left-corner parsing and psychological plausibility", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "191--197", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Resnik. 1992. Left-corner parsing and psycholog- ical plausibility. In COLING, pages 191-197.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Deterministic left corner parsing", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "J" |
| ], |
| "last": "Rosenkrantz", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "M" |
| ], |
| "last": "Lewis", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "IEEE Conference Record of 11th Annual Symposium on Switching and Automata Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "139--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.J. Rosenkrantz and P.M. Lewis. 1970. Deterministic left corner parsing. In Switching and Automata The- ory, 1970., IEEE Conference Record of 11th Annual Symposium on, pages 139-152, Oct.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Broad-coverage parsing using human-like memory constraints", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Schuler", |
| "suffix": "" |
| }, |
| { |
| "first": "Samir", |
| "middle": [], |
| "last": "Abdelrahman", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Lane", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Computational Linguistics", |
| "volume": "36", |
| "issue": "1", |
| "pages": "1--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Schuler, Samir AbdelRahman, Tim Miller, and Lane Schwartz. 2010. Broad-coverage parsing using human-like memory constraints. Computational Lin- guistics, 36(1):1-30.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation", |
| "authors": [ |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "663--672", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roy Schwartz, Omri Abend, Roi Reichart, and Ari Rap- poport. 2011. Neutralizing linguistically problem- atic annotations in unsupervised dependency parsing evaluation. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies, pages 663-672, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "A gold standard dependency corpus for English", |
| "authors": [ |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Miriam", |
| "middle": [], |
| "last": "Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation (LREC-2014).", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Annealing structural bias in multilingual weighted grammar induction", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the International Conference on Computational Linguistics and the Association for Computational Linguistics (COLING-ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "569--576", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar in- duction. In Proceedings of the International Confer- ence on Computational Linguistics and the Associ- ation for Computational Linguistics (COLING-ACL), pages 569-576, Sydney, July.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natu- ral Language Text. Ph.D. thesis, Johns Hopkins Uni- versity, Baltimore, MD, October.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "An analysis of frequency-and memory-based processing costs", |
| "authors": [ |
| { |
| "first": "Marten", |
| "middle": [], |
| "last": "Van Schijndel", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Schuler", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "95--105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marten van Schijndel and William Schuler. 2013. An analysis of frequency-and memory-based processing costs. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 95-105, Atlanta, Georgia, June. Associa- tion for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "An example of LC transform: (a) the original parse; and (b) the transformed parse.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Two CFG parses for \"dogs ran fast\" and the results of LC transform ((a) \u2192 (b); (c) \u2192 (d)). X[a/b] is an abbreviation for X[a]/X[b].", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "The senses of the symbols as a chart item. X[w h /w p ] predicts the next dependent outside of the span while X[w p /w p ] predicts the head.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "UAS for various settings on (UD) WSJ.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>: Attachment scores on UD with or without</td></tr><tr><td>root POS constraints. A-Greek = Ancient Greek.</td></tr><tr><td>N10 = Naseem et al. (2010) with modified rules.</td></tr><tr><td>Indonesian, and Portuguese), though LEN performs</td></tr><tr><td>equally well and on average, LEN performs slightly</td></tr><tr><td>better. Harmonic initialization does not work well.</td></tr></table>", |
| "num": null, |
| "text": "", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "num": null, |
| "text": "Unlabeled bracket scores in various settings. Avg. is the average score across languages.", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |