| { |
| "paper_id": "N07-1018", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:47:30.634546Z" |
| }, |
| "title": "Bayesian Inference for PCFGs via Markov chain Monte Carlo", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Cognitive and Linguistic Sciences Brown University", |
| "location": {} |
| }, |
| "email": "johnson@brown.edu" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [ |
| "L" |
| ], |
| "last": "Griffiths", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of California", |
| "location": { |
| "settlement": "Berkeley" |
| } |
| }, |
| "email": "griffiths@berkeley.edu" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents two Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference of probabilistic context free grammars (PCFGs) from terminal strings, providing an alternative to maximum-likelihood estimation using the Inside-Outside algorithm. We illustrate these methods by estimating a sparse grammar describing the morphology of the Bantu language Sesotho, demonstrating that with suitable priors Bayesian techniques can infer linguistic structure in situations where maximum likelihood methods such as the Inside-Outside algorithm only produce a trivial grammar.", |
| "pdf_parse": { |
| "paper_id": "N07-1018", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents two Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference of probabilistic context free grammars (PCFGs) from terminal strings, providing an alternative to maximum-likelihood estimation using the Inside-Outside algorithm. We illustrate these methods by estimating a sparse grammar describing the morphology of the Bantu language Sesotho, demonstrating that with suitable priors Bayesian techniques can infer linguistic structure in situations where maximum likelihood methods such as the Inside-Outside algorithm only produce a trivial grammar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The standard methods for inferring the parameters of probabilistic models in computational linguistics are based on the principle of maximum-likelihood estimation; for example, the parameters of Probabilistic Context-Free Grammars (PCFGs) are typically estimated from strings of terminals using the Inside-Outside (IO) algorithm, an instance of the Expectation Maximization (EM) procedure (Lari and Young, 1990) . However, much recent work in machine learning and statistics has turned away from maximum-likelihood in favor of Bayesian methods, and there is increasing interest in Bayesian methods in computational linguistics as well (Finkel et al., 2006) . This paper presents two Markov chain Monte Carlo (MCMC) algorithms for inferring PCFGs and their parses from strings alone. These can be viewed as Bayesian alternatives to the IO algorithm.", |
| "cite_spans": [ |
| { |
| "start": 389, |
| "end": 411, |
| "text": "(Lari and Young, 1990)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 635, |
| "end": 656, |
| "text": "(Finkel et al., 2006)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of Bayesian inference is to compute a distribution over plausible parameter values. This \"posterior\" distribution is obtained by combining the likelihood with a \"prior\" distribution P(\u03b8) over parameter values \u03b8. In the case of PCFG inference \u03b8 is the vector of rule probabilities, and the prior might assert a preference for a sparse grammar (see below). The posterior probability of each value of \u03b8 is given by Bayes' rule: P(\u03b8|D) \u221d P(D|\u03b8)P(\u03b8).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In principle Equation 1 defines the posterior probability of any value of \u03b8, but computing this may not be tractable analytically or numerically. For this reason a variety of methods have been developed to support approximate Bayesian inference. One of the most popular methods is Markov chain Monte Carlo (MCMC), in which a Markov chain is used to sample from the posterior distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper presents two new MCMC algorithms for inferring the posterior distribution over parses and rule probabilities given a corpus of strings. The first algorithm is a component-wise Gibbs sampler which is very similar in spirit to the EM algorithm, drawing parse trees conditioned on the current parameter values and then sampling the parameters conditioned on the current set of parse trees. The second algorithm is a component-wise Hastings sampler that \"collapses\" the probabilistic model, integrating over the rule probabilities of the PCFG, with the goal of speeding convergence. Both algo-rithms use an efficient dynamic programming technique to sample parse trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Given their usefulness in other disciplines, we believe that Bayesian methods like these are likely to be of general utility in computational linguistics as well. As a simple illustrative example, we use these methods to infer morphological parses for verbs from Sesotho, a southern Bantu language with agglutinating morphology. Our results illustrate that Bayesian inference using a prior that favors sparsity can produce linguistically reasonable analyses in situations in which EM does not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of this paper is structured as follows. The next section introduces the background for our paper, summarizing the key ideas behind PCFGs, Bayesian inference, and MCMC. Section 3 introduces our first MCMC algorithm, a Gibbs sampler for PCFGs. Section 4 describes an algorithm for sampling trees from the distribution over trees defined by a PCFG. Section 5 shows how to integrate out the rule weight parameters \u03b8 in a PCFG, allowing us to sample directly from the posterior distribution over parses for a corpus of strings. Finally, Section 6 illustrates these methods in learning Sesotho morphology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Let G = (T, N, S, R) be a Context-Free Grammar in Chomsky normal form with no useless productions, where T is a finite set of terminal symbols, N is a finite set of nonterminal symbols (disjoint from T ), S \u2208 N is a distinguished nonterminal called the start symbol, and R is a finite set of productions of the form A \u2192 B C or A \u2192 w, where A, B, C \u2208 N and w \u2208 T . In what follows we use \u03b2 as a variable ranging over (N \u00d7 N ) \u222a T .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic context-free grammars", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A Probabilistic Context-Free Grammar (G, \u03b8) is a pair consisting of a context-free grammar G and a real-valued vector \u03b8 of length |R| indexed by productions, where \u03b8 A\u2192\u03b2 is the production probability associated with the production A \u2192 \u03b2 \u2208 R. We require that \u03b8 A\u2192\u03b2 \u2265 0 and that for all nonterminals A \u2208 N , A\u2192\u03b2\u2208R \u03b8 A\u2192\u03b2 = 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic context-free grammars", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A PCFG (G, \u03b8) defines a probability distribution over trees t as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic context-free grammars", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "P G (t|\u03b8) = r\u2208R \u03b8 fr(t) r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic context-free grammars", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where t is generated by G and f r (t) is the number of times the production r = A \u2192 \u03b2 \u2208 R is used in the derivation of t. If G does not generate t let P G (t|\u03b8) = 0. The yield y(t) of a parse tree t is the sequence of terminals labeling its leaves. The probability of a string w \u2208 T + of terminals is the sum of the probability of all trees with yield w, i.e.:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic context-free grammars", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "P G (w|\u03b8) = t:y(t)=w P G (t|\u03b8).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic context-free grammars", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Given a corpus of strings w = (w 1 , . . . , w n ), where each w i is a string of terminals generated by a known CFG G, we would like to be able to infer the production probabilities \u03b8 that best describe that corpus. Taking w to be our data, we can apply Bayes' rule (Equation 1) to obtain:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian inference for PCFGs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "P(\u03b8|w) \u221d P G (w|\u03b8)P(\u03b8), where P G (w|\u03b8) = n i=1 P G (w i |\u03b8).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian inference for PCFGs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Using t to denote a sequence of parse trees for w, we can compute the joint posterior distribution over t and \u03b8, and then marginalize over t, with P(\u03b8|w) = t P(t, \u03b8|w). The joint posterior distribution on t and \u03b8 is given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian inference for PCFGs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "P(t, \u03b8|w) \u221d P(w|t)P(t|\u03b8)P(\u03b8) = n i=1 P(w i |t i )P(t i |\u03b8) P(\u03b8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian inference for PCFGs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "with P(w i |t i ) = 1 if y(t i ) = w i , and 0 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bayesian inference for PCFGs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The first step towards computing the posterior distribution is to define a prior on \u03b8. We take P(\u03b8) to be a product of Dirichlet distributions, with one distribution for each non-terminal A \u2208 N . The prior is parameterized by a positive real valued vector \u03b1 indexed by productions R, so each production probability \u03b8 A\u2192\u03b2 has a corresponding Dirichlet parameter \u03b1 A\u2192\u03b2 . Let R A be the set of productions in R with left-hand side A, and let \u03b8 A and \u03b1 A refer to the component subvectors of \u03b8 and \u03b1 respectively indexed by productions in R A . The Dirichlet prior P D (\u03b8|\u03b1) is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P D (\u03b8|\u03b1) = A\u2208N P D (\u03b8 A |\u03b1 A ), where P D (\u03b8 A |\u03b1 A ) = 1 C(\u03b1 A ) r\u2208R A \u03b8 \u03b1r\u22121 r and C(\u03b1 A ) = r\u2208R A \u0393(\u03b1 r ) \u0393( r\u2208R A \u03b1 r )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "where \u0393 is the generalized factorial function and C(\u03b1) is a normalization constant that does not depend on \u03b8 A . Dirichlet priors are useful because they are conjugate to the distribution over trees defined by a PCFG. This means that the posterior distribution on \u03b8 given a set of parse trees, P(\u03b8|t, \u03b1), is also a Dirichlet distribution. Applying Bayes' rule,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "P G (\u03b8|t, \u03b1) \u221d P G (t|\u03b8) P D (\u03b8|\u03b1) \u221d r\u2208R \u03b8 fr(t) r r\u2208R \u03b8 \u03b1r\u22121 r = r\u2208R \u03b8 fr(t)+\u03b1r \u22121 r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "which is a Dirichlet distribution with parameters f (t) + \u03b1, where f (t) is the vector of production counts in t indexed by r \u2208 R. We can thus write:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "P G (\u03b8|t, \u03b1) = P D (\u03b8|f (t) + \u03b1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "which makes it clear that the production counts combine directly with the parameters of the prior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dirichlet priors", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Having defined a prior on \u03b8, the posterior distribution over t and \u03b8 is fully determined by a corpus w. Unfortunately, computing the posterior probability of even a single choice of t and \u03b8 is intractable, as evaluating the normalizing constant for this distribution requires summing over all possible parses for the entire corpus and all sets of production probabilities. Nonetheless, it is possible to define algorithms that sample from this distribution using Markov chain Monte Carlo (MCMC). MCMC algorithms construct a Markov chain whose states s \u2208 S are the objects we wish to sample. The state space S is typically astronomically large -in our case, the state space includes all possible parses of the entire training corpus w -and the transition probabilities P(s \u2032 |s) are specified via a scheme guaranteed to converge to the desired distribution \u03c0(s) (in our case, the posterior distribution). We \"run\" the Markov chain (i.e., starting in initial state s 0 , sample a state s 1 from P(s \u2032 |s 0 ), then sample state s 2 from P(s \u2032 |s 1 ), and so on), with the probability that the Markov chain is in a particular state,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markov chain Monte Carlo", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "P(s i ), converging to \u03c0(s i ) as i \u2192 \u221e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markov chain Monte Carlo", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "After the chain has run long enough for it to approach its stationary distribution, the expectation E \u03c0 [f ] of any function f (s) of the state s will be approximated by the average of that function over the set of sample states produced by the algorithm. For example, in our case, given samples (t i , \u03b8 i ) for i = 1, . . . , \u2113 produced by an MCMC algorithm, we can estimate \u03b8 as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markov chain Monte Carlo", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "E \u03c0 [\u03b8] \u2248 1 \u2113 \u2113 i=1 \u03b8 i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markov chain Monte Carlo", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The remainder of this paper presents two MCMC algorithms for PCFGs. Both algorithms proceed by setting the initial state of the Markov chain to a guess for (t, \u03b8) and then sampling successive states using a particular transition matrix. The key difference betwen the two algorithms is the form of the transition matrix they assume.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markov chain Monte Carlo", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The Gibbs sampler (Geman and Geman, 1984) is one of the simplest MCMC methods, in which transitions between states of the Markov chain result from sampling each component of the state conditioned on the current value of all other variables. In our case, this means alternating between sampling from two distributions:", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 41, |
| "text": "(Geman and Geman, 1984)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "P(t|\u03b8, w, \u03b1) = n i=1 P(t i |w i , \u03b8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": ", and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "P(\u03b8|t, w, \u03b1) = P D (\u03b8|f (t) + \u03b1) = A\u2208N P D (\u03b8 A |f A (t) + \u03b1 A ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Thus every two steps we generate a new sample of t and \u03b8. This alternation between parsing and updating \u03b8 is reminiscent of the EM algorithm, with the Expectation step replaced by sampling t and the Maximization step replaced by sampling \u03b8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "t i t 1 t n w 1 w i w n \u03b8 Aj . . . \u03b8 A1 . . . \u03b8 A |N | \u03b1 A1 . . . . . . \u03b1 Aj \u03b1 A |N | . . . . . . . . . . . .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The dependencies among variables in a PCFG are depicted graphically in Figure 1 , which makes clear that the Gibbs sampler is highly parallelizable (just like the EM algorithm). Specifically, the parses t i are independent given \u03b8 and so can be sampled in parallel from the following distribution as described in the next section.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 71, |
| "end": 79, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "P G (t i |w i , \u03b8) = P G (t i |\u03b8) P G (w i |\u03b8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We make use of the fact that the posterior is a product of independent Dirichlet distributions in order to sample \u03b8 from P D (\u03b8|t, \u03b1). The production probabilities \u03b8 A for each nonterminal A \u2208 N are sampled from a Dirchlet distibution with parameters \u03b1 \u2032 A = f A (t) + \u03b1 A . There are several methods for sampling \u03b8 = (\u03b8 1 , . . . , \u03b8 m ) from a Dirichlet distribution with parameters \u03b1 = (\u03b1 1 , . . . , \u03b1 m ), with the simplest being sampling x j from a Gamma(\u03b1 j ) distribution for j = 1, . . . , m and then setting \u03b8 j = x j / m k=1 x k (Gentle, 2003) .", |
| "cite_spans": [ |
| { |
| "start": 540, |
| "end": 554, |
| "text": "(Gentle, 2003)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Gibbs sampler for P(t, \u03b8|w, \u03b1)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This section completes the description of the Gibbs sampler for (t, \u03b8) by describing a dynamic programming algorithm for sampling trees from the set of parses for a string generated by a PCFG. This algorithm appears fairly widely known: it was described by Goodman (1998) and Finkel et al (2006) and used by Ding et al (2005) , and is very similar to other dynamic programming algorithms for CFGs, so we only summarize it here. The algorithm consists of two steps. The first step constructs a standard \"inside\" table or chart, as used in the Inside-Outside algorithm for PCFGs (Lari and Young, 1990) . The second step involves a recursion from larger to smaller strings, sampling from the productions that expand each string and constructing the corresponding tree in a top-down fashion.", |
| "cite_spans": [ |
| { |
| "start": 257, |
| "end": 271, |
| "text": "Goodman (1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 276, |
| "end": 295, |
| "text": "Finkel et al (2006)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 308, |
| "end": 325, |
| "text": "Ding et al (2005)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 577, |
| "end": 599, |
| "text": "(Lari and Young, 1990)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this section we take w to be a string of terminal symbols w = (w 1 , . . . , w n ) where each w i \u2208 T , and define w i,k = (w i+1 , . . . , w k ) (i.e., the substring from w i+1 up to w k ). Further, let G A = (T, N, A, R) , i.e., a CFG just like G except that the start symbol has been replaced with A, so, P G A (t|\u03b8) is the probability of a tree t whose root node is labeled A and P G A (w|\u03b8) is the sum of the probabilities of all trees whose root nodes are labeled A with yield w.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 213, |
| "end": 225, |
| "text": "(T, N, A, R)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The Inside algorithm takes as input a PCFG (G, \u03b8) and a string w = w 0,n and constructs a table with entries p A,i,k for each A \u2208 N and 0 \u2264 i < k \u2264 n, where p A,i,k = P G A (w i,k |\u03b8), i.e., the probability of A rewriting to w i,k . The table entries are recursively defined below, and computed by enumerating all feasible i, k and A in any order such that all smaller values of k \u2212 i are enumerated before any larger values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "p A,k\u22121,k = \u03b8 A\u2192w k p A,i,k = A\u2192B C\u2208R i<j<k \u03b8 A\u2192B C p B,i,j p C,j,k", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "for all A, B, C \u2208 N and 0 \u2264 i < j < k \u2264 n. At the end of the Inside algorithm, P G (w|\u03b8) = p S,0,n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The second step of the sampling algorithm uses the function SAMPLE, which returns a sample from P G (t|w, \u03b8) given the PCFG (G, \u03b8) and the inside table p A,i,k . SAMPLE takes as arguments a nonterminal A \u2208 N and a pair of string positions 0 \u2264 i < k \u2264 n and returns a tree drawn from P G A (t|w i,k , \u03b8). It functions in a top-down fashion, selecting the production A \u2192 B C to expand the A, and then recursively calling itself to expand B and C respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "function SAMPLE(A, i, k) : if k \u2212 i = 1 then return TREE(A, w k ) (j, B, C) = MULTI(A, i, k) return TREE(A, SAMPLE(B, i, j), SAMPLE(C, j, k))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this pseudo-code, TREE is a function that constructs unary or binary tree nodes respectively, and MULTI is a function that produces samples from a multinomial distribution over the possible \"split\" positions j and nonterminal children B and C, where:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "P(j, B, C) = \u03b8 A\u2192B C P G B (w i,j |\u03b8) P G C (w j,k |\u03b8) P G A (w i,k |\u03b8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "5 A Hastings sampler for P(t|w, \u03b1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The Gibbs sampler described in Section 3 has the disadvantage that each sample of \u03b8 requires reparsing the training corpus w. In this section, we describe a component-wise Hastings algorithm for sampling directly from P(t|w, \u03b1), marginalizing over the production probabilities \u03b8.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Transitions between states are produced by sampling parses t i from P(t i |w i , t \u2212i , \u03b1) for each string w i in turn, where t \u2212i = (t 1 , . . . , t i\u22121 , t i+1 , . . . , t n ) is the current set of parses for w \u2212i = (w 1 , . . . , w i\u22121 , w i+1 , . . . , w n ). Marginalizing over \u03b8 effectively means that the production probabilities are updated after each sentence is parsed, so it is reasonable to expect that this algorithm will converge faster than the Gibbs sampler described earlier. While the sampler does not explicitly provide samples of \u03b8, the results outlined in Sections 2.3 and 3 can be used to sample the posterior distribution over \u03b8 for each sample of t if required.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Let P D (\u03b8|\u03b1) be a Dirichlet product prior, and let \u2206 be the probability simplex for \u03b8. Then by integrating over the posterior Dirichlet distributions we have:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P(t|\u03b1) = \u2206 P G (t|\u03b8)P D (\u03b8|\u03b1)d\u03b8 = A\u2208N C(\u03b1 A + f A (t)) C(\u03b1 A )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where C was defined in Equation 2. Because we are marginalizing over \u03b8, the trees t i become dependent upon one another. Intuitively, this is because w i may provide information about \u03b8 that influences how some other string w j should be parsed. We can use Equation 3 to compute the conditional probability P(t i |t \u2212i , \u03b1) as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "P(t i |t \u2212i , \u03b1) = P(t|\u03b1) P(t \u2212i |\u03b1) = A\u2208N C(\u03b1 A + f A (t)) C(\u03b1 A + f A (t \u2212i ))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Now, if we could sample from", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "P(t i |w i , t \u2212i , \u03b1) = P(w i |t i )P(t i |t \u2212i , \u03b1) P(w i |t \u2212i , \u03b1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "we could construct a Gibbs sampler whose states were the parse trees t. Unfortunately, we don't even know if there is an efficient algorithm for calculating P(w i |t \u2212i , \u03b1), let alone an efficient sampling algorithm for this distribution. Fortunately, this difficulty is not fatal. A Hastings sampler for a probability distribution \u03c0(s) is an MCMC algorithm that makes use of a proposal distribution Q(s \u2032 |s) from which it draws samples, and uses an acceptance/rejection scheme to define a transition kernel with the desired distribution \u03c0(s). Specifically, given the current state s, a sample s \u2032 = s drawn from Q(s \u2032 |s) is accepted as the next state with probability", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A(s, s \u2032 ) = min 1, \u03c0(s \u2032 )Q(s|s \u2032 ) \u03c0(s)Q(s \u2032 |s)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "and with probability 1 \u2212 A(s, s \u2032 ) the proposal is rejected and the next state is the current state s. We use a component-wise proposal distribution, generating new proposed values for t i , where i is chosen at random. Our proposal distribution is the posterior distribution over parse trees generated by the PCFG with grammar G and production probabilities \u03b8 \u2032 , where \u03b8 \u2032 is chosen based on the current t \u2212i as described below. Each step of our Hastings sampler is as follows. First, we compute \u03b8 \u2032 from t \u2212i as described below. Then we sample t \u2032 i from P(t i |w i , \u03b8 \u2032 ) using the algorithm described in Section 4. Finally, we accept the proposal t \u2032 i given the old parse t i for w i with probability:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A(t i , t \u2032 i ) = min 1, P(t \u2032 i |w i , t \u2212i , \u03b1)P(t i |w i , \u03b8 \u2032 ) P(t i |w i , t \u2212i , \u03b1)P(t \u2032 i |w i , \u03b8 \u2032 ) = min 1, P(t \u2032 i |t \u2212i , \u03b1)P(t i |w i , \u03b8 \u2032 ) P(t i |t \u2212i , \u03b1)P(t \u2032 i |w i , \u03b8 \u2032 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The key advantage of the Hastings sampler over the Gibbs sampler here is that because the acceptance probability is a ratio of probabilities, the difficult to compute P(w i |t \u2212i , \u03b1) is a common factor of both the numerator and denominator, and hence is not required. The P (w i |t i ) term also disappears, being 1 for both the numerator and the denominator since our proposal distribution can only generate trees for which w i is the yield. All that remains is to specify the production probabilities \u03b8 \u2032 of the proposal distribution P(t \u2032 i |w i , \u03b8 \u2032 ). While the acceptance rule used in the Hastings algorithm ensures that it produces samples from P(t i |w i , t \u2212i , \u03b1) with any proposal grammar \u03b8 \u2032 in which all productions have nonzero probability, the algorithm is more efficient (i.e., fewer proposals are rejected) if the proposal distribution is close to the distribution to be sampled.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
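The acceptance step just described can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `log_p_cond` (standing in for log P(t | t_{−i}, α)) and `log_p_prop` (for log P(t | w_i, θ′)) are hypothetical callables the caller must supply.

```python
import math
import random

def accept_proposal(t_old, t_new, log_p_cond, log_p_prop, rng=random.random):
    """Metropolis-Hastings accept/reject for a proposed parse t_new.

    log_p_cond(t): log P(t | t_-i, alpha), the target (up to a constant).
    log_p_prop(t): log P(t | w_i, theta'), the proposal distribution.
    Both are hypothetical placeholders for the paper's quantities.
    Returns True if t_new should replace t_old.
    """
    # A(t, t') = min(1, [P(t'|t_-i,a) P(t|w,th')] / [P(t|t_-i,a) P(t'|w,th')])
    log_ratio = (log_p_cond(t_new) + log_p_prop(t_old)
                 - log_p_cond(t_old) - log_p_prop(t_new))
    accept_prob = math.exp(min(0.0, log_ratio))  # min(1, ratio) in log space
    return rng() < accept_prob
```

Working in log space avoids underflow: the parse probabilities being compared are typically tiny for long strings, but their ratio is well behaved.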
| { |
| "text": "Given the observations above about the correspondence between terms in P(t_i | t_{\u2212i}, \u03b1) and the relative frequency of the corresponding productions in t_{\u2212i}, we set \u03b8\u2032 to the expected value E[\u03b8 | t_{\u2212i}, \u03b1] of \u03b8 given t_{\u2212i} and \u03b1 as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u03b8\u2032_r = (f_r(t_{\u2212i}) + \u03b1_r) / \u2211_{r\u2032 \u2208 R_A} (f_{r\u2032}(t_{\u2212i}) + \u03b1_{r\u2032})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficiently sampling from P(t|w, \u03b8)", |
| "sec_num": "4" |
| }, |
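Concretely, this update is the posterior mean of a Dirichlet: add the pseudo-counts α_r to the rule frequencies in t_{−i} and normalize within each left-hand side. A small sketch under assumed data structures (plain dicts keyed by rule; `rules_by_lhs` groups the rules R_A by their nonterminal A; none of these names come from the paper):

```python
def proposal_probs(rule_counts, alpha, rules_by_lhs):
    """theta'_r = (f_r(t_-i) + alpha_r) / sum over r' in R_A of (f_r'(t_-i) + alpha_r').

    rule_counts:  dict rule -> f_r(t_-i), occurrences of r in the other parses
    alpha:        dict rule -> Dirichlet parameter alpha_r
    rules_by_lhs: dict nonterminal A -> list of rules R_A expanding A
    """
    theta = {}
    for lhs, rules in rules_by_lhs.items():
        # Normalize over the rules sharing this left-hand side.
        denom = sum(rule_counts.get(r, 0) + alpha[r] for r in rules)
        for r in rules:
            theta[r] = (rule_counts.get(r, 0) + alpha[r]) / denom
    return theta
```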
| { |
| "text": "As stated in the introduction, the primary contribution of this paper is introducing MCMC methods for Bayesian inference to computational linguistics. Bayesian inference using MCMC is a technique of generic utility, much like Expectation-Maximization and other general inference techniques, and we believe that it belongs in every computational linguist's toolbox alongside these other techniques. Inferring a PCFG to describe the syntactic structure of a natural language is an obvious application of grammar inference techniques, and it is well known that PCFG inference using maximum-likelihood techniques such as the Inside-Outside (IO) algorithm, a dynamic programming Expectation-Maximization (EM) algorithm for PCFGs, performs extremely poorly on such tasks. We have applied the Bayesian MCMC methods described here to such problems and obtain results very similar to those produced using IO. We believe that the primary reason why both IO and the Bayesian methods perform so poorly on this task is that simple PCFGs are not accurate models of English syntactic structure. We know that PCFGs that represent only major phrasal categories ignore a wide variety of lexical and syntactic dependencies in natural language. State-of-the-art systems for unsupervised syntactic structure induction use models that are very different from these kinds of PCFGs (Klein and Manning, 2004; Smith and Eisner, 2006). 1 Our goal in this section is modest: we aim merely to provide an illustrative example of Bayesian inference using MCMC. As Figure 2 shows, when the Dirichlet prior parameter \u03b1_r approaches 0 the prior probability P_D(\u03b8_r | \u03b1) becomes increasingly concentrated around 0. This ability to bias the sampler toward sparse grammars (i.e., grammars in which many productions have probabilities close to 0) is useful when we attempt to identify relevant productions from a much larger set of possible productions via parameter estimation.", |
| "cite_spans": [ |
| { |
| "start": 1409, |
| "end": 1434, |
| "text": "(Klein and Manning, 2004;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1435, |
| "end": 1458, |
| "text": "Smith and Eisner, 2006)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1461, |
| "end": 1462, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1585, |
| "end": 1593, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
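The sparsity effect of a small α can be seen directly by sampling rule probabilities from the prior. A quick illustration using the standard normalized-Gamma construction of a Dirichlet draw (nothing here is specific to the paper's grammar):

```python
import random

def sample_dirichlet(alphas, rng):
    """Draw theta ~ Dirichlet(alphas) by normalizing independent Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
# Small alpha concentrates mass on a few components; large alpha spreads it out.
sparse = sample_dirichlet([0.01] * 10, rng)
dense = sample_dirichlet([10.0] * 10, rng)
```

With α = 0.01 almost all of the probability mass typically lands on one or two components, mirroring the sparse grammars, with many near-zero productions, that the prior favors.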
| { |
| "text": "The Bantu language Sesotho is a richly agglutinative language, in which verbs consist of a sequence of morphemes, including optional Subject Markers (SM), Tense (T), Object Markers (OM), Mood (M) and derivational affixes as well as the obligatory Verb stem (V), as shown in the following example:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "re_SM-a_T-di_OM-bon_V-a_M \"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We see them\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "1 It is easy to demonstrate that the poor quality of the PCFG models is the cause of these problems rather than search or other algorithmic issues. If one initializes either the IO or Bayesian estimation procedures with treebank parses and then runs the procedure using the yields alone, the accuracy of the parses uniformly decreases while the (posterior) likelihood uniformly increases with each iteration, demonstrating that improving the (posterior) likelihood of such models does not improve parse accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We used an implementation of the Hastings sampler described in Section 5 to infer morphological parses t for a corpus w of 2,283 unsegmented Sesotho verb types extracted from the Sesotho corpus available from CHILDES (MacWhinney and Snow, 1985; Demuth, 1992) . We chose this corpus because the words have been morphologically segmented manually, making it possible for us to evaluate the morphological parses produced by our system. We constructed a CFG G containing the following productions", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 244, |
| "text": "(MacWhinney and Snow, 1985;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 245, |
| "end": 258, |
| "text": "Demuth, 1992)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "\u2192 V; Word \u2192 V M; Word \u2192 SM V M; Word \u2192 SM T V M; Word \u2192 SM T OM V M", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "together with productions expanding the preterminals SM, T, OM, V and M to each of the 16,350 distinct substrings occurring anywhere in the corpus, producing a grammar with 81,755 productions in all. In effect, G encodes the basic morphological structure of the Sesotho verb (ignoring factors such as derivational morphology and irregular forms), but provides no information about the phonological identity of the morphemes. Note that G actually generates a finite language. However, G parameterizes the probability distribution over the strings it generates in a manner that would be difficult to characterize succinctly except in terms of the productions given above. Moreover, with approximately 20 times more productions than training strings, each string is highly ambiguous and estimation is highly underconstrained, so it provides an excellent test-bed for sparse priors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We estimated the morphological parses t in two ways. First, we ran the IO algorithm initialized with a uniform initial estimate \u03b8_0 for \u03b8 to produce an estimate of the MLE \u03b8\u0302, and then computed the Viterbi parses t\u0302 of the training corpus w with respect to the PCFG (G, \u03b8\u0302). Second, we ran the Hastings sampler initialized with trees sampled from (G, \u03b8_0) with several different values for the parameters of the prior. We experimented with a number of techniques for speeding convergence of both the IO and Hastings algorithms, and two of these were particularly effective on this problem. Annealing, i.e., using P(t|w)^{1/\u03c4} in place of P(t|w), where \u03c4 is a \"temperature\" parameter starting around 5 and slowly adjusted toward 1, sped the convergence of both algorithms. We ran both algorithms for several thousand iterations over the corpus, and both seemed to converge fairly quickly once \u03c4 was set to 1. \"Jittering\" the initial estimate of \u03b8 used in the IO algorithm also sped its convergence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
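The annealing trick amounts to dividing log probabilities by τ while τ is decreased toward 1. A sketch with an assumed linear warm-up schedule (the paper only says τ starts around 5 and is slowly adjusted toward 1, so the exact schedule here is illustrative):

```python
import math

def anneal_logprob(logprob, tau):
    """log P(t|w)^(1/tau): tau > 1 flattens the distribution, easing exploration."""
    return logprob / tau

def temperature_schedule(n_iters, tau_start=5.0, warmup_frac=0.5):
    """Decrease tau linearly from tau_start to 1, then hold it at 1.

    warmup_frac is an assumed knob, not a value from the paper.
    """
    warmup = max(1, int(n_iters * warmup_frac))
    for k in range(n_iters):
        if k < warmup:
            yield tau_start + (1.0 - tau_start) * k / max(1, warmup - 1)
        else:
            yield 1.0
```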
| { |
| "text": "The IO algorithm converges to a solution where \u03b8_{Word \u2192 V} = 1, and every string w \u2208 w is analysed as a single morpheme V. (In fact, in this grammar P(w_i | \u03b8) is the empirical probability of w_i, and it is easy to prove that this \u03b8 is the MLE.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The samples t produced by the Hastings algorithm depend on the parameters of the Dirichlet prior. We set \u03b1_r to a single value \u03b1 for all productions r. We found that for \u03b1 > 10^{\u22122} the samples produced by the Hastings algorithm were the same trivial analyses as those produced by the IO algorithm, but as \u03b1 was reduced below this value, t began to exhibit nontrivial structure. We evaluated the quality of the segmentations in the morphological analyses t in terms of unlabeled precision, recall, f-score and exact match (the fraction of words correctly segmented into morphemes; we ignored morpheme labels because the manual morphological analyses contain many morpheme labels that we did not include in G). Figure 3 plots how these quantities vary with \u03b1; we obtain an f-score of 0.75 and an exact word match accuracy of 0.54 at \u03b1 = 10^{\u22125} (the corresponding values for the MLE \u03b8\u0302 are both 0). Note that we obtained good results as \u03b1 was varied over several orders of magnitude, so the actual value of \u03b1 is not critical. Thus in this application the ability to prefer sparse grammars enables us to find linguistically meaningful analyses. This ability to find linguistically meaningful structure is relatively rare in our experience with unsupervised PCFG induction.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 701, |
| "end": 709, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We also experimented with a version of IO modified to perform Bayesian MAP estimation, where the Maximization step of the IO procedure is replaced with Bayesian inference using a Dirichlet prior, i.e., where the rule probabilities \u03b8^{(k)} at iteration k are estimated using:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "\u03b8^{(k)}_r \u221d max(0, E[f_r | w, \u03b8^{(k\u22121)}] + \u03b1 \u2212 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
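A sketch of this modified M-step, assuming expected rule counts have already been computed by the Inside-Outside E-step (`expected_counts` is a hypothetical stand-in for E[f_r | w, θ^(k−1)]; `rules_by_lhs` groups rules by their left-hand side):

```python
def map_update(expected_counts, alpha, rules_by_lhs):
    """One Dirichlet-MAP M-step: theta_r proportional to max(0, E[f_r] + alpha - 1).

    With alpha < 1, any rule whose expected count falls below 1 - alpha is
    driven to exactly zero, which is how some training strings can end up
    with no parse at all, as noted in the text.
    """
    theta = {}
    for lhs, rules in rules_by_lhs.items():
        weights = {r: max(0.0, expected_counts.get(r, 0.0) + alpha - 1.0)
                   for r in rules}
        denom = sum(weights.values())
        for r in rules:
            theta[r] = weights[r] / denom if denom > 0 else 0.0
    return theta
```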
| { |
| "text": "Clearly such an approach is very closely related to the Bayesian procedures presented in this article, and in some circumstances this may be a useful estimator. However, in our experiments with the Sesotho data above we found that for the small values of \u03b1 necessary to obtain a sparse solution, the expected rule count E[f_r] for many rules r was less than 1 \u2212 \u03b1. Thus on the next iteration \u03b8_r = 0, resulting in there being no parse whatsoever for many of the strings in the training data. Variational Bayesian techniques offer a systematic way of dealing with these problems, but we leave this for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inferring sparse grammars", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This paper has described basic algorithms for performing Bayesian inference over PCFGs given terminal strings. We presented two Markov chain Monte Carlo algorithms (a Gibbs and a Hastings sampling algorithm) for sampling from the posterior distribution over parse trees given a corpus of their yields and a Dirichlet product prior over the production probabilities. As a component of these algorithms we described an efficient dynamic programming algorithm for sampling trees from a PCFG, which is useful in its own right. We used these sampling algorithms to infer morphological analyses of Sesotho verbs given their strings (a task on which the standard Maximum Likelihood estimator returns a trivial and linguistically uninteresting solution), achieving 0.75 unlabeled morpheme f-score and 0.54 exact word match accuracy. Thus this is one of the few cases we are aware of in which a PCFG estimation procedure returns linguistically meaningful structure. We attribute this to the ability of the Bayesian prior to prefer sparse grammars. We expect that these algorithms will be of interest to the computational linguistics community both because a Bayesian approach to PCFG estimation is more flexible than the Maximum Likelihood methods that currently dominate the field (cf. the use of a prior as a bias towards sparse solutions), and because these techniques provide essential building blocks for more complex models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Cross-Linguistic Study of Language Acquisition", |
| "authors": [ |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Demuth", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Lawrence Erlbaum Associates", |
| "volume": "3", |
| "issue": "", |
| "pages": "557--638", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Katherine Demuth. 1992. Acquisition of Sesotho. In Dan Slobin, editor, The Cross-Linguistic Study of Language Ac- quisition, volume 3, pages 557-638. Lawrence Erlbaum As- sociates, Hillsdale, N.J.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "RNA secondary structure prediction by centroids in a Boltzmann weighted ensemble", |
| "authors": [ |
| { |
| "first": "Ye", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi", |
| "middle": [ |
| "Yu" |
| ], |
| "last": "Chan", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "E" |
| ], |
| "last": "Lawrence", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "RNA", |
| "volume": "11", |
| "issue": "", |
| "pages": "1157--1166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ye Ding, Chi Yu Chan, and Charles E. Lawrence. 2005. RNA secondary structure prediction by centroids in a Boltzmann weighted ensemble. RNA, 11:1157-1166.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines", |
| "authors": [ |
| { |
| "first": "Jenny", |
| "middle": [ |
| "Rose" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "618--626", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of the 2006 Conference on Empir- ical Methods in Natural Language Processing, pages 618- 626, Sydney, Australia. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [], |
| "last": "Geman", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald", |
| "middle": [], |
| "last": "Geman", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
| "volume": "6", |
| "issue": "", |
| "pages": "721--741", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, 6:721-741.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Random number generation and Monte Carlo methods", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "E" |
| ], |
| "last": "Gentle", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James E. Gentle. 2003. Random number generation and Monte Carlo methods. Springer, New York, 2nd edition.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Corpus-based induction of syntactic structure: Models of dependency and constituency", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "478--485", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Chris Manning. 2004. Corpus-based induc- tion of syntactic structure: Models of dependency and con- stituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 478-485.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The estimation of Stochastic Context-Free Grammars using the Inside-Outside algorithm", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lari", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computer Speech and Language", |
| "volume": "4", |
| "issue": "", |
| "pages": "35--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Lari and S.J. Young. 1990. The estimation of Stochastic Context-Free Grammars using the Inside-Outside algorithm. Computer Speech and Language, 4:35-56.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The child language data exchange system", |
| "authors": [ |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Macwhinney", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Journal of Child Language", |
| "volume": "12", |
| "issue": "", |
| "pages": "271--296", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brian MacWhinney and Catherine Snow. 1985. The child lan- guage data exchange system. Journal of Child Language, 12:271-296.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Annealing structural bias in multilingual weighted grammar induction", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "569--576", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Pro- ceedings of the 21st International Conference on Computa- tional Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics, pages 569-576, Sydney, Australia. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "A Bayes net representation of dependencies among the variables in a PCFG.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "A Dirichlet prior \u03b1 on a binomial parameter \u03b8 1 . As \u03b1 1 \u2192 0, P(\u03b8 1 |\u03b1) is increasingly concentrated around 0.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "Accuracy of morphological segmentations of Sesotho verbs proposed by the Hastings algorithms as a function of Dirichlet prior parameter \u03b1. F-score, precision and recall are unlabeled morpheme scores, while Exact is the fraction of words correctly segmented.", |
| "type_str": "figure", |
| "num": null |
| } |
| } |
| } |
| } |