{
"paper_id": "N09-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:58.668560Z"
},
"title": "Improving Unsupervised Dependency Parsing with Richer Contexts and Smoothing",
"authors": [
{
"first": "William",
"middle": [
"P"
],
"last": "Headden",
"suffix": "",
"affiliation": {
"laboratory": "Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence",
"institution": "",
"location": {
"postCode": "02912",
"region": "RI"
}
},
"email": "headdenw@cs.brown.edu"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {
"laboratory": "Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence",
"institution": "",
"location": {
"postCode": "02912",
"region": "RI"
}
},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": "",
"affiliation": {
"laboratory": "Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence",
"institution": "",
"location": {
"postCode": "02912",
"region": "RI"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Unsupervised grammar induction models tend to employ relatively simple models of syntax when compared to their supervised counterparts. Traditionally, the unsupervised models have been kept simple due to tractability and data sparsity concerns. In this paper, we introduce basic valence frames and lexical information into an unsupervised dependency grammar inducer and show how this additional information can be leveraged via smoothing. Our model produces state-of-theart results on the task of unsupervised grammar induction, improving over the best previous work by almost 10 percentage points.",
"pdf_parse": {
"paper_id": "N09-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "Unsupervised grammar induction models tend to employ relatively simple models of syntax when compared to their supervised counterparts. Traditionally, the unsupervised models have been kept simple due to tractability and data sparsity concerns. In this paper, we introduce basic valence frames and lexical information into an unsupervised dependency grammar inducer and show how this additional information can be leveraged via smoothing. Our model produces state-of-theart results on the task of unsupervised grammar induction, improving over the best previous work by almost 10 percentage points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The last decade has seen great strides in statistical natural language parsing. Supervised and semisupervised methods now provide highly accurate parsers for a number of languages, but require training from corpora hand-annotated with parse trees. Unfortunately, manually annotating corpora with parse trees is expensive and time consuming so for languages and domains with minimal resources it is valuable to study methods for parsing without requiring annotated sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we focus on unsupervised dependency parsing. Our goal is to produce a directed graph of dependency relations (e.g. Figure 1) where each edge indicates a head-argument relation. Since the task is unsupervised, we are not given any examples of correct dependency graphs and only take words and their parts of speech as input. Most of the recent work in this area Cohen et al., 2008) has focused on variants of the The big dog barks Dependency Model with Valence (DMV) by Klein and Manning (2004) . DMV was the first unsupervised dependency grammar induction system to achieve accuracy above a right-branching baseline. However, DMV is not able to capture some of the more complex aspects of language. Borrowing some ideas from the supervised parsing literature, we present two new models: Extended Valence Grammar (EVG) and its lexicalized extension (L-EVG). The primary difference between EVG and DMV is that DMV uses valence information to determine the number of arguments a head takes but not their categories. In contrast, EVG allows different distributions over arguments for different valence slots. L-EVG extends EVG by conditioning on lexical information as well. This allows L-EVG to potentially capture subcategorizations. The downside of adding additional conditioning events is that we introduce data sparsity problems. Incorporating more valence and lexical information increases the number of parameters to estimate. A common solution to data sparsity in supervised parsing is to add smoothing. We show that smoothing can be employed in an unsupervised fashion as well, and show that mixing DMV, EVG, and L-EVG together produces state-ofthe-art results on this task. To our knowledge, this is the first time that grammars with differing levels of detail have been successfully combined for unsupervised dependency parsing.",
"cite_spans": [
{
"start": 375,
"end": 394,
"text": "Cohen et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 483,
"end": 507,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 129,
"end": 138,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A brief overview of the paper follows. In Section 2, we discuss the relevant background. Section 3 presents how we will extend DMV with additional features. We describe smoothing in an unsupervised context in Section 4. In Section 5, we discuss search issues. We present our experiments in Section 6 and conclude in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, the observed variables will be a corpus of n sentences of text s = s 1 . . . s n , and for each word s ij an associated part-of-speech \u03c4 ij . We denote the set of all words as V w and the set of all parts-ofspeech as V \u03c4 . The hidden variables are parse trees t = t 1 . . . t n and parameters\u03b8 which specify a distribution over t. A dependency tree t i is a directed acyclic graph whose nodes are the words in s i . The graph has a single incoming edge for each word in each sentence, except one called the root of t i . An edge from word i to word j means that word j is an argument of word i or alternatively, word i is the head of word j. Note that each word token may be the argument of at most one head, but a head may have several arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "If parse tree t i can be drawn on a plane above the sentence with no crossing edges, it is called projective. Otherwise it is nonprojective. As in previous work, we restrict ourselves to projective dependency trees. The dependency models in this paper will be formulated as a particular kind of Probabilistic Context Free Grammar (PCFG), described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In order to perform smoothing, we will find useful a class of PCFGs in which the probabilities of certain rules are required to be the same. This will allow us to make independence assumptions for smoothing purposes without losing information, by giving analogous rules the same probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "G = (N , T , S, R, \u03b8) be a Probabilistic Con- text Free Grammar with nonterminal symbols N , terminal symbols T , start symbol S \u2208 N , set of productions R of the form N \u2192 \u03b2, N \u2208 N , \u03b2 \u2208 (N \u222a T ) * . Let R N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "indicate the subset of R whose left-hand sides are N . \u03b8 is a vector of length |R|, indexed by productions N \u2192 \u03b2 \u2208 R. \u03b8 N \u2192\u03b2 specifies the probability that N rewrites to \u03b2. We will let \u03b8 N indicate the subvector of \u03b8 corresponding to R N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "A tied PCFG constrains a PCFG G with a tying relation, which is an equivalence relation over rules that satisfies the following properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "1. Tied rules have the same probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "2. Rules expanding the same nonterminal are never tied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "3. If N 1 \u2192 \u03b2 1 and N 2 \u2192 \u03b2 2 are tied then the tying relation defines a one-to-one mapping between rules in R N 1 and R N 2 , and we say that N 1 and N 2 are tied nonterminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "As we see below, we can estimate tied PCFGs using standard techniques. Clearly, the tying relation also defines an equivalence class over nonterminals. Let f (t, r) denote the number of times rule r appears in tree t, and let f (t,r) = r\u2208r f (t, r). We see that the complete data likelihood is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "P (s, t|\u03b8) = r\u2208R r\u2208r \u03b8 f (t,r) r = r\u2208R\u03b8 f (t,r) r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "That is, the likelihood is a product of multinomials, one for each nonterminal equivalence class, and there are no constraints placed on the parameters of these multinomials besides being positive and summing to one. This means that all the standard estimation methods (e.g. Expectation Maximization, Variational Bayes) extend directly to tied PCFGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "Maximum likelihood estimation provides a point estimate of\u03b8. However, often we want to incorporate information about\u03b8 by modeling its prior distribution. As a prior, for eachN \u2208N we will specify a Dirichlet distribution over\u03b8N with hyperparameters \u03b1N . The Dirichlet has the density function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "P (\u03b8N |\u03b1N ) = \u0393( r\u2208RN \u03b1r) r\u2208RN \u0393(\u03b1r) r\u2208RN\u03b8 \u03b1r\u22121 r ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "Thus the prior over\u03b8 is a product of Dirichlets,which is conjugate to the PCFG likelihood function . That is, the posterior P (\u03b8|s, t, \u03b1) is also a product of Dirichlets, also factoring into a Dirichlet for each nonterminalN , where the parameters \u03b1r are augmented by the number of times rul\u0113 r is observed in tree t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "P (\u03b8|s, t, \u03b1) \u221d P (s, t|\u03b8)P (\u03b8|\u03b1) \u221d r\u2208R\u03b8 f (t,r)+\u03b1r \u22121 r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "We can see that \u03b1r acts as a pseudocount of the number of timesr is observed prior to t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "To make use of this prior, we use the Variational Bayes (VB) technique for PCFGs with Dirichlet Priors presented by Kurihara and Sato (2004) . VB estimates a distribution over\u03b8. In contrast, Expectation Maximization estimates merely a point estimate of\u03b8. In VB, one estimates Q(t,\u03b8), called the variational distribution, which approximates the posterior distribution P (t,\u03b8|s, \u03b1) by minimizing the KL divergence of P from Q. Minimizing the KL divergence, it turns out, is equivalent to maximizing a lower bound F of the log marginal likelihood log P (s|\u03b1).",
"cite_spans": [
{
"start": 116,
"end": 140,
"text": "Kurihara and Sato (2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "log P (s|\u03b1) \u2265 t \u03b8 Q(t,\u03b8) log P (s, t,\u03b8|\u03b1) Q(t,\u03b8) = F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "The negative of the lower bound, \u2212F, is sometimes called the free energy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "As is typical in variational approaches, Kurihara and Sato (2004) make certain independence assumptions about the hidden variables in the variational posterior, which will make estimating it simpler. It factors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "Q(t,\u03b8) = Q(t)Q(\u03b8) = n i=1 Q i (t i ) N \u2208N Q(\u03b8N ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "The goal is to recover Q(\u03b8), the estimate of the posterior distribution over parameters and Q(t), the estimate of the posterior distribution over trees. Finding a local maximum of F is done via an alternating maximization of Q(\u03b8) and Q(t). Kurihara and Sato (2004) show that each Q(\u03b8N ) is a Dirichlet distribution with parameter\u015d \u03b1 r = \u03b1 r + E Q(t) f (t, r).",
"cite_spans": [
{
"start": 240,
"end": 264,
"text": "Kurihara and Sato (2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tied Probabilistic Context Free Grammars",
"sec_num": "2.1"
},
{
"text": "In the sections that follow, we frame various dependency models as a particular variety of CFGs known as split-head bilexical CFGs (Eisner and Satta, 1999) . These allow us to use the fast Eisner and Satta (1999) parsing algorithm to compute the expectations required by VB in O(m 3 ) time (Eisner and Blatz, 2007; Johnson, 2007) where m is the length of the sentence. 1 In the split-head bilexical CFG framework, each nonterminal in the grammar is annotated with a terminal symbol. For dependency grammars, these annotations correspond to words and/or parts-ofspeech. Additionally, split-head bilexical CFGs require that each word s ij in sentence s i is represented in a split form by two terminals called its left part s ijL and right part s ijR . The set of these parts constitutes the terminal symbols of the grammar. This split-head property relates to a particular type of dependency grammar in which the left and right dependents of a head are generated independently. Note that like CFGs, split-head bilexical CFGs can be made probabilistic.",
"cite_spans": [
{
"start": 131,
"end": 155,
"text": "(Eisner and Satta, 1999)",
"ref_id": "BIBREF4"
},
{
"start": 189,
"end": 212,
"text": "Eisner and Satta (1999)",
"ref_id": "BIBREF4"
},
{
"start": 302,
"end": 314,
"text": "Blatz, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 315,
"end": 329,
"text": "Johnson, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 369,
"end": 370,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Split-head Bilexical CFGs",
"sec_num": "2.2"
},
{
"text": "The most successful recent work on dependency induction has focused on the Dependency Model with Valence (DMV) by Klein and Manning (2004) . DMV is a generative model in which the head of the sentence is generated and then each head recursively generates its left and right dependents. The arguments of head H in direction d are generated by repeatedly deciding whether to generate another new argument or to stop and then generating the argument if required. The probability of deciding whether to generate another argument is conditioned on H, d and whether this would be the first argument (this is the sense in which it models valence). When DMV generates an argument, the part-of-speech of that argument A is generated given H and d. Figure 2 : Rule schema for DMV. For brevity, we omit the portion of the grammar that handles the right arguments since they are symmetric to the left (all rules are the same except for the attachment rule where the RHS is reversed). val \u2208 {0, 1} indicates whether we have made any attachments.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 739,
"end": 747,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "LH \u2192 HL STOP | dir = L, head = H, val = 0 LH \u2192 L 1 H CONT | dir = L, head = H, val = 0 L \u2032 H \u2192 HL STOP | dir = L, head = H, val = 1 L \u2032 H \u2192 L 1 H CONT | dir = L, head = H, val = 1 L 1 H \u2192 YA L \u2032 H Arg A | dir = L, head = H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "The grammar schema for this model is shown in Figure 2 . The first rule generates the root of the sentence. Note that these rules are for \u2200H, A \u2208 V \u03c4 so there is an instance of the first schema rule for each part-of-speech. Y H splits words into their left and right components. L H encodes the stopping decision given that we have not generated any arguments so far. L \u2032 H encodes the same decision after generating one or more arguments. L 1 H represents the distribution over left attachments. To extract dependency relations from these parse trees, we scan for attachment rules (e.g., L 1 H \u2192 Y A L \u2032 H ) and record that A depends on H. The schema omits the rules for right arguments since they are symmetric. We show a parse of \"The big dog barks\" in Figure 3 . 2 Much of the extensions to this work have focused on estimation procedures. Klein and Manning (2004) use Expectation Maximization to estimate the model parameters. Smith and Eisner (2005) and Smith (2006) investigate using Contrastive Estimation to estimate DMV. Contrastive Estimation maximizes the conditional probability of the observed sentences given a neighborhood of similar unseen sequences. The results of this approach vary widely based on regularization and neighborhood, but often outperforms EM.",
"cite_spans": [
{
"start": 767,
"end": 768,
"text": "2",
"ref_id": null
},
{
"start": 844,
"end": 868,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF9"
},
{
"start": 942,
"end": 955,
"text": "Eisner (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 2",
"ref_id": null
},
{
"start": 756,
"end": 764,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "2 Note that our examples use words as leaf nodes but in our unlexicalized models, the leaf nodes are in fact parts Figure 3 : DMV split-head bilexical CFG parse of \"The big dog barks.\" Smith (2006) also investigates two techniques for maximizing likelihood while incorporating the locality bias encoded in the harmonic initializer for DMV. One technique, skewed deterministic annealing, ameliorates the local maximum problem by flattening the likelihood and adding a bias towards the Klein and Manning initializer, which is decreased during learning. The second technique is structural annealing (Smith and Eisner, 2006; which penalizes long dependencies initially, gradually weakening the penalty during estimation. If hand-annotated dependencies on a held-out set are available for parameter selection, this performs far better than EM; however, performing parameter selection on a held-out set without the use of gold dependencies does not perform as well.",
"cite_spans": [
{
"start": 607,
"end": 620,
"text": "Eisner, 2006;",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "-of-speech. S Y barks L barks L 1 barks Y dog L dog L 1 dog Y T he L T he The L R T he The R L \u2032 dog L 1 dog Y big L big big L R big big R L \u2032 dog dog L R dog dog R L \u2032 barks barks L R barks barks R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "Cohen et al. (2008) investigate using Bayesian Priors with DMV. The two priors they use are the Dirichlet (which we use here) and the Logistic Normal prior, which allows the model to capture correlations between different distributions. They initialize using the harmonic initializer of Klein and Manning (2004) . They find that the Logistic Normal distribution performs much better than the Dirichlet with this initialization scheme.",
"cite_spans": [
{
"start": 287,
"end": 311,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "Cohen and Smith 2009 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "STOP | dir = L, head = H, val = 0 LH \u2192 L \u2032 H CONT | dir = L, head = H, val = 0 L \u2032 H \u2192 L 1 H STOP | dir = L, head = H, val = 1 L \u2032 H \u2192 L 2 H CONT | dir = L, head = H, val = 1 L 2 H \u2192 YA L \u2032 H Arg A | dir = L, head = H, val = 1 L 1 H \u2192 YA HL Arg A | dir = L, head = H, val = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "Figure 4: Extended Valence Grammar schema. As before, we omit rules involving the right parts of words. In this case, val \u2208 {0, 1} indicates whether we are generating the nearest argument (0) or not (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "rently with our work) an extension of this, the Shared Logistic Normal prior, which allows different PCFG rule distributions to share components. They use this machinery to investigate smoothing the attachment distributions for (nouns/verbs), and for learning using multiple languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Model with Valence",
"sec_num": "2.3"
},
{
"text": "DMV models the distribution over arguments identically without regard to their order. Instead, we propose to distinguish the distribution over the argument nearest the head from the distribution of subsequent arguments. 3 Consider the following changes to the DMV grammar (results shown in Figure 4 ). First, we will introduce the rule L 2 H \u2192 Y A L \u2032 H to denote the decision of what argument to generate for positions not nearest to the head. Next, instead of having L \u2032 H expand to H L or L 1 H , we will expand it to L 1 H (attach to nearest argument and stop) or L 2 H (attach to nonnearest argument and continue). We call this the Extended Valence Grammar (EVG).",
"cite_spans": [
{
"start": 220,
"end": 221,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Enriched Contexts",
"sec_num": "3"
},
{
"text": "As a concrete example, consider the phrase \"the big hungry dog\" (Figure 5 ). We would expect that distribution over the nearest left argument for \"dog\" to be different than farther left arguments. The fig-3 McClosky (2008) explores this idea further in an unsmoothed grammar. Figure 5 : An example of moving from DMV to EVG for a fragment of \"The big dog.\" Boxed nodes indicate changes. The key difference is that EVG distinguishes between the distributions over the argument nearest the head (big) from arguments farther away (The).",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 73,
"text": "(Figure 5",
"ref_id": null
},
{
"start": 201,
"end": 206,
"text": "fig-3",
"ref_id": null
},
{
"start": 276,
"end": 284,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Enriched Contexts",
"sec_num": "3"
},
{
"text": ". . . L dog L 1 dog Y T he The L The R L \u2032 dog L 1 dog Y big big L big R L \u2032 dog dog L . . . L dog L \u2032 dog L 2 dog Y T he The L The R L \u2032 dog L 1 dog Y big big L big R dog L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enriched Contexts",
"sec_num": "3"
},
{
"text": "ure shows that EVG allows these two distributions to be different (nonterminals L 2 dog and L 1 dog ) whereas DMV forces them to be equivalent (both use L 1 dog as the nonterminal).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Enriched Contexts",
"sec_num": "3"
},
{
"text": "All of the probabilistic models discussed thus far have incorporated only part-of-speech information (see Footnote 2). In supervised parsing of both dependencies and constituency, lexical information is critical (Collins, 1999) . We incorporate lexical information into EVG (henceforth L-EVG) by extending the distributions over argument parts-of-speech A to condition on the head word h in addition to the head part-of-speech H, direction d and argument position v. The argument word a distribution is merely conditioned on part-of-speech A; we leave refining this model to future work.",
"cite_spans": [
{
"start": 212,
"end": 227,
"text": "(Collins, 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.1"
},
{
"text": "In order to incorporate lexicalization, we extend the EVG CFG to allow the nonterminals to be annotated with both the word and part-of-speech of the head. We first remove the old rules Y H \u2192 L H R H for each H \u2208 V \u03c4 . Then we mark each nonterminal which is annotated with a part-of-speech as also annotated with its head, with a single exception: Y H . We add a new nonterminal Y H,h for each H \u2208 V \u03c4 , h \u2208 V w , and the rules Y H \u2192 Y H,h and Y H,h \u2192 L H,h R H,h . The rule Y H \u2192 Y H,h corresponds to selecting the word, given its part-ofspeech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalization",
"sec_num": "3.1"
},
{
"text": "In supervised estimation one common smoothing technique is linear interpolation, (Jelinek, 1997) . This section explains how linear interpolation can be represented using a PCFG with tied rule probabilities, and how one might estimate smoothing parameters in an unsupervised framework.",
"cite_spans": [
{
"start": 81,
"end": 96,
"text": "(Jelinek, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "In many probabilistic models it is common to estimate the distribution of some event x conditioned on some set of context information P (x|N (1) . . . N (k) ) by smoothing it with less complicated conditional distributions.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "(x|N (1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "Using linear interpolation we model P (x|N (1) . . . N (k) ) as a weighted average of two distributions \u03bb 1 P 1 (x|N (1) , . . . , N (k) ) + \u03bb 2 P 2 (x|N (1) , . . . , N (k\u22121) ), where the distribution P 2 makes an independence assumption by dropping the conditioning event N (k) .",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "(x|N (1)",
"ref_id": "FIGREF0"
},
{
"start": 112,
"end": 120,
"text": "(x|N (1)",
"ref_id": "FIGREF0"
},
{
"start": 147,
"end": 157,
"text": "2 (x|N (1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "In a PCFG a nonterminal N can encode a collection of conditioning events N (1) . . . N (k) , and \u03b8 N determines a distribution conditioned on N (1) . . . N (k) over events represented by the rules r \u2208 R N . For example, in EVG the nonterminal L 1 N N encodes three separate pieces of conditioning information: the direction d = left, the head part-of-speech H = NN , and the argument position v = 0; \u03b8 L 1 NN \u2192Y J J NN L represents the probability of generating JJ as the first left argument of NN . Suppose in EVG we are interested in smoothing P (A | d, H, v) with a component that excludes the head conditioning event. Using linear interpolation, this would be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "P (A | d, H, v) = \u03bb 1 P 1 (A | d, H, v)+\u03bb 2 P 2 (A | d, v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "We will estimate PCFG rules with linearly interpolated probabilities by creating a tied PCFG which is extended by adding rules that select between the main distribution P 1 and the backoff distribution P 2 , and also rules that correspond to draws from those distributions. We will make use of tied rule probabilities to make the independence assumption in the backoff distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "We still use the original grammar to parse the sentence. However, we estimate the parameters in the extended grammar and then translate them back into the original grammar for parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "More formally, suppose B \u2286 N is a set of nonterminals (called the backoff set) with conditioning events N (1) . . . N (k\u22121) in common (differing in a conditioning event N (k) ), and with rule sets of the same cardinality. If G is our model's PCFG, we can define a new tied PCFG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "G \u2032 = (N \u2032 , T , S, R \u2032 , \u03c6), where N \u2032 = N \u222a N b \u2113 | N \u2208 B, \u2113 \u2208 {1, 2}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": ", meaning for each nonterminal N in the backoff set we add two nonterminals N b 1 , N b 2 representing each distribution P 1 and P 2 . The new rule set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "R \u2032 = (\u222a N \u2208N \u2032 R \u2032 N ) where for all N \u2208 B rule set R \u2032 N = N \u2192 N b \u2113 | \u2113 \u2208 {1, 2}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": ", meaning at N in G \u2032 we decide which distribution P 1 , P 2 to use; and for N \u2208 B and \u2113 \u2208 {1, 2} ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "R \u2032 N b \u2113 = N b \u2113 \u2192 \u03b2 | N \u2192 \u03b2 \u2208 R N indicating a draw from distribution P \u2113 . For nonterminals N \u2208 B, R \u2032 N = R N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "Finally, for each N, M \u2208 B we specify a tying relation between the rules in R \u2032 N b 2 and R \u2032 M b 2 , grouping together analogous rules. This has the effect of making an independence assumption about P 2 , namely that it ignores the conditioning event N (k) , drawing from a common distribution each time a nonterminal N b 2 is rewritten.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "For example, in EVG to smooth P (A =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "DT | d = left, H = NN , v = 0) with P 2 (A = DT | d = left, v = 0) we define the backoff set to be L 1 H | H \u2208 V \u03c4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "In the extended grammar we define the tying relation to form rule equivalence classes by the argument they generate, i.e. for each argument A \u2208 V \u03c4 , we have a rule equivalence class",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "L 1b 2 H \u2192 Y A H L | H \u2208 V \u03c4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "We can see that in grammar G \u2032 each N \u2208 B eventually ends up rewriting to one of N 's expansions \u03b2 in G. There are two indirect paths, one through N b 1 and one through N b 2 . Thus this defines the probability of N \u2192 \u03b2 in G, \u03b8 N \u2192\u03b2 , as the probability of rewriting N as \u03b2 in G \u2032 via N b 1 and N b 2 . That is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "\u03b8 N \u2192\u03b2 = \u03c6 N \u2192N b 1 \u03c6 N b 1 \u2192\u03b2 + \u03c6 N \u2192N b 2 \u03c6 N b 2 \u2192\u03b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
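{
"text": "As a concrete arithmetic sketch of this interpolation (with hypothetical probabilities, not values from the paper): if \u03c6_{N \u2192 N^{b_1}} = 0.7, \u03c6_{N^{b_1} \u2192 \u03b2} = 0.5, \u03c6_{N \u2192 N^{b_2}} = 0.3 and \u03c6_{N^{b_2} \u2192 \u03b2} = 0.2, then \u03b8_{N \u2192 \u03b2} = 0.7 \u00d7 0.5 + 0.3 \u00d7 0.2 = 0.41; the two indirect paths implement linear interpolation with mixture weights \u03c6_{N \u2192 N^{b_1}} and \u03c6_{N \u2192 N^{b_2}}, which sum to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},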
{
"text": "The example in Figure 6 shows the probability that L 1 dog rewrites to Y big dog L in grammar G. Typically when smoothing we need to incorporate the prior knowledge that conditioning events that have been seen fewer times should be more strongly smoothed. We accomplish this by setting the Dirichlet hyperparameters for each N \u2192 N b 1 , N \u2192 N b 2 decision to (K, 2K), where K = |R N b 1 | is the number of rewrite rules for A. This ensures that the model will only start to ignore the backoff distribu- Figure 6 : Using linear interpolation to smooth L 1 dog \u2192 Y big dog L : The first component represents the distribution fully conditioned on head dog, while the second component represents the distribution ignoring the head conditioning event. This later is accomplished by tying the rule",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 6",
"ref_id": null
},
{
"start": 503,
"end": 511,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "P G 0 B B @ L 1 dog Y big dog L 1 C C A = P G \u2032 0 B B B B B B B @ L 1 dog L 1b 1 dog Y big dog L 1 C C C C C C C A + P G \u2032 0 B B B B B B B @ L 1 dog L 1b 2 dog Y big dog L 1 C C C C C C C A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "L 1b2 dog \u2192 Y big dog L to, for instance, L 1b2 cat \u2192 Y big cat L , L 1b2 f ish \u2192 Y big f ish L etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
{
"text": "tion after having seen a sufficiently large number of training examples. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},
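{
"text": "A back-of-envelope reading of this prior (our gloss, not a claim from the paper): under Dirichlet(K, 2K) the prior mean of the interpolation weights is K/(K + 2K) = 1/3 for the fully conditioned distribution and 2K/(K + 2K) = 2/3 for the backoff distribution, and because the pseudocounts scale with K, on the order of K real observations of N are needed before the posterior can overturn this preference for the backoff distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing",
"sec_num": "4"
},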
{
"text": "Our first experiments examine smoothing the distributions over an argument in the DMV and EVG models. In DMV we smooth the probability of argument A given head part-of-speech H and direction d with a distribution that ignores H. In EVG, which conditions on H, d and argument position v we back off two ways. The first is to ignore v and use backoff conditioning event H, d. This yields a backoff distribution with the same conditioning information as the argument distribution from DMV. We call this EVG smoothed-skip-val. The second possibility is to have the backoff distribution ignore the head part-of-speech H and use backoff conditioning event v, d. This assumes that arguments share a common distribution across heads. We call this EVG smoothed-skip-head. As we see below, backing off by ignoring the part-ofspeech of the head H worked better than ignoring the argument position v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Dependency Models",
"sec_num": "4.1"
},
{
"text": "For L-EVG we smooth the argument part-ofspeech distribution (conditioned on the head word) with the unlexicalized EVG smoothed-skip-head model. Klein and Manning (2004) strongly emphasize the importance of smart initialization in getting good performance from DMV. The likelihood function is full of local maxima and different initial parameter values yield vastly different quality solutions. They offer what they call a \"harmonic initializer\" which initializes the attachment probabilities to favor arguments that appear more closely in the data. This starts EM in a state preferring shorter attachments.",
"cite_spans": [
{
"start": 144,
"end": 168,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Dependency Models",
"sec_num": "4.1"
},
{
"text": "Since our goal is to expand the model to incorporate lexical information, we want an initialization scheme which does not depend on the details of DMV. The method we use is to create M sets of B random initial settings and to run VB some small number of iterations (40 in all our experiments) for each initial setting. For each of the M sets, the model with the best free energy of the B runs is then run out until convergence (as measured by likelihood of a held-out data set); the other models are pruned away. In this paper we use B = 20 and M = 50.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization and Search issues",
"sec_num": "5"
},
{
"text": "For the bth setting, we draw a random sample from the prior\u03b8 (b) . We set the initial Q(t) = P (t|s,\u03b8 (b) ) which can be calculated using the Expectation-Maximization E-Step. Q(\u03b8) is then initialized using the standard VB M-step.",
"cite_spans": [
{
"start": 61,
"end": 64,
"text": "(b)",
"ref_id": null
},
{
"start": 102,
"end": 105,
"text": "(b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization and Search issues",
"sec_num": "5"
},
{
"text": "For the Lexicalized-EVG, we modify this procedure slightly, by first running M B smoothed EVG models for 40 iterations each and selecting the best model in each cohort as before; each L-EVG distribution is initialized from its corresponding EVG distribution. The new P (A|h, H, d, v) distributions are set initially to their corresponding P (A|H, d, v) values.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 352,
"text": "(A|H, d, v)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Initialization and Search issues",
"sec_num": "5"
},
{
"text": "We trained on the standard Penn Treebank WSJ corpus (Marcus et al., 1993) . Following Klein and Manning (2002) , sentences longer than 10 words after removing punctuation are ignored. We refer to this variant as WSJ10. Following Cohen et al. (2008), we train on sections 2-21, used 22 as a held-out development corpus, and present results evaluated on section 23. The models were all trained using Variational Bayes, and initialized as described in Section 5. To evaluate, we follow Cohen et al. (2008) in using the mean of the variational posterior Dirichlets as a point estimate\u03b8 \u2032 . For the unsmoothed models we decode by selecting the Viterbi parse given\u03b8 \u2032 , or argmax t P (t|s,\u03b8 \u2032 ).",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF11"
},
{
"start": 86,
"end": 110,
"text": "Klein and Manning (2002)",
"ref_id": "BIBREF8"
},
{
"start": 483,
"end": 502,
"text": "Cohen et al. (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For the smoothed models we find the Viterbi parse of the unsmoothed CFG, but use the smoothed probabilities. We evaluate against the gold standard dependencies for section 23, which were extracted from the phrase structure trees using the standard rules by Yamada and Matsumoto (2003) . We measure the percent accuracy of the directed dependency edges. For the lexicalized model, we replaced all words that were seen fewer than 100 times with \"UNK.\" We ran each of our systems 10 times, and report the average directed accuracy achieved. The results are shown in Table 1 . We compare to work by Cohen et al. (2008) and Cohen and Smith (2009) .",
"cite_spans": [
{
"start": 257,
"end": 284,
"text": "Yamada and Matsumoto (2003)",
"ref_id": "BIBREF16"
},
{
"start": 595,
"end": 614,
"text": "Cohen et al. (2008)",
"ref_id": "BIBREF1"
},
{
"start": 619,
"end": 641,
"text": "Cohen and Smith (2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 563,
"end": 570,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Looking at Table 1 , we can first of all see the benefit of randomized initialization over the harmonic initializer for DMV. We can also see a large gain by adding smoothing to DMV, topping even the logistic normal prior. The unsmoothed EVG actually performs worse than unsmoothed DMV, but both smoothed versions improve even on smoothed DMV. Adding lexical information (L-EVG) yields a moderate further improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "As the greatest improvement comes from moving to model EVG smoothed-skip-head, we show in Table 2 the most probable arguments for each val, dir, using the mean of the appropriate variational Dirichlet. For d = right, v = 1, P (A|v, d) largely seems to acts as a way of grouping together various verb types, while for d = lef t, v = 0 the model finds that nouns tend to act as the closest left argument. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We present a smoothing technique for unsupervised PCFG estimation which allows us to explore more sophisticated dependency grammars. Our method combines linear interpolation with a Bayesian prior that ensures the backoff distribution receives probability mass. Estimating the smoothed model requires running the standard Variational Bayes on an extended PCFG. We used this technique to estimate a series of dependency grammars which extend DMV with additional valence and lexical information. We found that both were helpful in learning English dependency grammars. Our L-EVG model gives the best reported accuracy to date on the WSJ10 corpus. Future work includes using lexical information more deeply in the model by conditioning argument words and valence on the lexical head. We suspect that successfully doing so will require using much larger datasets. We would also like to explore using our smoothing technique in other models such as HMMs. For instance, we could do unsupervised HMM part-of-speech induction by smooth a tritag model with a bitag model. Finally, we would like to learn the parts-of-speech in our dependency model from text and not rely on the gold-standard tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Efficiently parsable versions of split-head bilexical CFGs for the models described in this paper can be derived using the fold-unfold grammar transform(Eisner and Blatz, 2007;Johnson, 2007).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We set the other Dirichlet hyperparameters to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is based upon work supported by National Science Foundation grants 0544127 and 0631667 and DARPA GALE contract HR0011-06-2-0001. We thank members of BLLIP for their feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen and Noah A. Smith. 2009. Shared lo- gistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of NAACL-HLT 2009.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Logistic normal priors for unsupervised probabilistic grammar induction",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Gimpel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in Neural Information Processing Systems 21",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen, Kevin Gimpel, and Noah A. Smith. 2008. Logistic normal priors for unsupervised prob- abilistic grammar induction. In Advances in Neural Information Processing Systems 21.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Head-driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "The University of Pennsylvania",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, The Uni- versity of Pennsylvania.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Program transformations for optimization of parsing algorithms and other weighted logic programs",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th Conference on Formal Grammar",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and John Blatz. 2007. Program transforma- tions for optimization of parsing algorithms and other weighted logic programs. In Proceedings of the 11th Conference on Formal Grammar.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient parsing for bilexical context-free grammars and headautomaton grammars",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Giorgio Satta. 1999. Efficient pars- ing for bilexical context-free grammars and head- automaton grammars. In Proceedings of ACL 1999.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical Methods for Speech Recognition",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. The MIT Press, Cambridge, Mas- sachusetts.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bayesian inference for PCFGs via Markov chain Monte Carlo",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Goldwa- ter. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Proceedings of NAACL 2007.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Transforming projective bilexical dependency grammars into efficiently-parsable CFGs with unfold-fold",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2007. Transforming projective bilexical dependency grammars into efficiently-parsable CFGs with unfold-fold. In Proceedings of ACL 2007.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A generative constituent-context model for improved grammar induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. 2002. A genera- tive constituent-context model for improved grammar induction. In Proceedings of ACL 2002.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Corpusbased induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL 2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. 2004. Corpus- based induction of syntactic structure: Models of de- pendency and constituency. In Proceedings of ACL 2004, July.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An application of the variational bayesian approach to probabilistics context-free grammars",
"authors": [
{
"first": "Kenichi",
"middle": [],
"last": "Kurihara",
"suffix": ""
},
{
"first": "Taisuke",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2004,
"venue": "IJCNLP 2004 Workshop Beyond Shallow Analyses",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenichi Kurihara and Taisuke Sato. 2004. An applica- tion of the variational bayesian approach to probabilis- tics context-free grammars. In IJCNLP 2004 Work- shop Beyond Shallow Analyses.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Modeling valence effects in unsupervised grammar induction",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky. 2008. Modeling valence effects in un- supervised grammar induction. Technical Report CS- 09-01, Brown University, Providence, RI, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Guiding unsupervised grammar induction using contrastive estimation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "International Joint Conference on Artificial Intelligence Workshop on Grammatical Inference Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2005. Guiding unsu- pervised grammar induction using contrastive estima- tion. In International Joint Conference on Artificial Intelligence Workshop on Grammatical Inference Ap- plications.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Annealing structural bias in multilingual weighted grammar induction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing struc- tural bias in multilingual weighted grammar induction. In Proceedings of COLING-ACL 2006.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text. Ph.D. thesis, Department of Computer Science, Johns Hopkins University.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In In Proceedings of the International Workshop on Pars- ing Technologies.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example dependency parse.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Select H as root YH \u2192 LH RH Move to split-head representation",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF3": {
"text": "",
"content": "<table><tr><td>: Directed accuracy (DA) for WSJ10, section 23.</td></tr><tr><td>*, \u2020 indicate results reported by Cohen et al. (2008), Co-</td></tr><tr><td>hen and Smith (2009) respectively. Standard deviations</td></tr><tr><td>over 10 runs are given in parentheses</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Most likely arguments given valence and direction, according to smoothing distribution P (arg|dir, val) in EVG smoothed-skip-head model with lowest free energy.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}