{
"paper_id": "P05-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:37:44.281600Z"
},
"title": "Learning Stochastic OT Grammars: A Bayesian approach using Data Augmentation and Gibbs Sampling",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "90095",
"settlement": "Los Angeles",
"region": "CA"
}
},
"email": "yinglin@ucla.edu"
},
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "90095",
"settlement": "Los Angeles",
"region": "CA"
}
},
"email": ""
},
{
"first": "Ed",
"middle": [],
"last": "Stabler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "90095",
"settlement": "Los Angeles",
"region": "CA"
}
},
"email": ""
},
{
"first": "Yingnian",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "90095",
"settlement": "Los Angeles",
"region": "CA"
}
},
"email": ""
},
{
"first": "Colin",
"middle": [],
"last": "Wilson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"postCode": "90095",
"settlement": "Los Angeles",
"region": "CA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Stochastic Optimality Theory (Boersma, 1997) is a widely-used model in linguistics that previously lacked a theoretically sound learning method. In this paper, a Markov chain Monte-Carlo method is proposed for learning Stochastic OT Grammars. Following a Bayesian framework, the goal is to find the posterior distribution of the grammar given the relative frequencies of input-output pairs. The Data Augmentation algorithm allows one to simulate a joint posterior distribution by iterating two conditional sampling steps. This Gibbs sampler constructs a Markov chain that converges to the joint distribution, and the target posterior can be derived as its marginal distribution.",
"pdf_parse": {
"paper_id": "P05-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "Stochastic Optimality Theory (Boersma, 1997) is a widely-used model in linguistics that previously lacked a theoretically sound learning method. In this paper, a Markov chain Monte-Carlo method is proposed for learning Stochastic OT Grammars. Following a Bayesian framework, the goal is to find the posterior distribution of the grammar given the relative frequencies of input-output pairs. The Data Augmentation algorithm allows one to simulate a joint posterior distribution by iterating two conditional sampling steps. This Gibbs sampler constructs a Markov chain that converges to the joint distribution, and the target posterior can be derived as its marginal distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Optimality Theory (Prince and Smolensky, 1993) is a linguistic theory that dominates the field of phonology, and some areas of morphology and syntax. The standard version of OT contains the following assumptions:",
"cite_spans": [
{
"start": 18,
"end": 46,
"text": "(Prince and Smolensky, 1993)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A grammar is a set of ordered constraints ({C i : i = 1, \u2022 \u2022 \u2022 , N }, >);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Each constraint C i is a function: \u03a3 * \u2192 {0, 1, \u2022 \u2022 \u2022 }, where \u03a3 * is the set of strings in the language;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Each underlying form u corresponds to a set of candidates GEN (u) . To obtain the unique surface form, the candidate set is successively filtered according to the order of constraints, so that only the most harmonic candidates remain after each filtering. If only 1 candidate is left in the candidate set, it is chosen as the optimal output.",
"cite_spans": [
{
"start": 60,
"end": 67,
"text": "GEN (u)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The popularity of OT is partly due to learning algorithms that induce constraint ranking from data. However, most of such algorithms cannot be applied to noisy learning data. Stochastic Optimality Theory (Boersma, 1997 ) is a variant of Optimality Theory that tries to quantitatively predict linguistic variation. As a popular model among linguists that are more engaged with empirical data than with formalisms, Stochastic OT has been used in a large body of linguistics literature. In Stochastic OT, constraints are regarded as independent normal distributions with unknown means and fixed variance. As a result, the stochastic constraint hierarchy generates systematic linguistic variation. For example, consider a grammar with 3 constraints, C 1 \u223c N (\u00b5 1 , \u03c3 2 ), C 2 \u223c N (\u00b5 2 , \u03c3 2 ), C 3 \u223c N (\u00b5 3 , \u03c3 2 ), and 2 competing candidates for a given input x: x \u223c y 1 with p(.) = .77 and violations (C 1 , C 2 , C 3 ) = (0, 0, 1); x \u223c y 2 with p(.) = .23 and violations (1, 1, 0). The probabilities p(.) are obtained by repeatedly sampling the 3 normal distributions, generating the winning candidate according to the ordering of constraints, and counting the relative frequencies in the outcome. As a result, the grammar will assign nonzero probabilities to a given set of outputs, as shown above.",
"cite_spans": [
{
"start": 204,
"end": 218,
"text": "(Boersma, 1997",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The learning problem of Stochastic OT involves fitting a grammar G \u2208 R N to a set of candidates with frequency counts in a corpus. For example, if the learning data is the above table, we need to find an estimate of G = (\u00b5 1 , \u00b5 2 , \u00b5 3 ) 1 so that the following ordering relations hold with certain probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "max{C 1 , C 2 } > C 3 , with probability .77; max{C 1 , C 2 } < C 3 , with probability .23",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current method for fitting Stochastic OT models, used by many linguists, is the Gradual Learning Algorithm (GLA) (Boersma and Hayes, 2001) . GLA looks for the correct ranking values by using the following heuristic, which resembles gradient descent. First, an input-output pair is sampled from the data; second, an ordering of the constraints is sampled from the grammar and used to generate an output; and finally, the means of the constraints are updated so as to minimize the error. The updating is done by adding or subtracting a \"plasticity\" value that goes to zero over time. The intuition behind GLA is that it does \"frequency matching\", i.e. looking for a better match between the output frequencies of the grammar and those in the data. As it turns out, GLA does not work in all cases 2 , and its lack of formal foundations has been questioned by a number of researchers (Keller and Asudeh, 2002; Goldwater and Johnson, 2003) . However, considering the broad range of linguistic data that has been analyzed with Stochastic OT, it seems inadvisable to reject this model because of the absence of theoretically sound learning methods. Rather, a general solution is needed to evaluate Stochastic OT as a model for linguistic variation. In this paper, I introduce an algorithm for learning Stochastic OT grammars using Markov chain Monte-Carlo methods. Within a Bayesian framework, the learning problem is formalized as finding the posterior distribution of ranking values (G) given the information on constraint interaction based on input-output pairs (D). The posterior contains all the information needed for linguists' use: for example, if there is a grammar that will generate the exact frequencies as in the data, such a grammar will appear as a mode of the posterior.",
"cite_spans": [
{
"start": 117,
"end": 142,
"text": "(Boersma and Hayes, 2001)",
"ref_id": "BIBREF3"
},
{
"start": 884,
"end": 909,
"text": "(Keller and Asudeh, 2002;",
"ref_id": "BIBREF9"
},
{
"start": 910,
"end": 938,
"text": "Goldwater and Johnson, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In computation, the posterior distribution is simulated with MCMC methods because the likelihood function has a complex form, thus making a maximum-likelihood approach hard to perform. Such problems are avoided by using the Data Augmentation algorithm (Tanner and Wong, 1987) to make computation feasible: to simulate the posterior distribution G \u223c p(G|D), we augment the parameter space and simulate a joint distribution (G, Y ) \u223c p(G, Y |D). It turns out that by setting Y as the value of constraints that observe the desired ordering, simulating from p(G, Y |D) can be achieved with a Gibbs sampler, which constructs a Markov chain that converges to the joint posterior distribution (Geman and Geman, 1984; Gelfand and Smith, 1990 ). I will also discuss some issues related to efficiency in implementation.",
"cite_spans": [
{
"start": 252,
"end": 275,
"text": "(Tanner and Wong, 1987)",
"ref_id": "BIBREF14"
},
{
"start": 686,
"end": 709,
"text": "(Geman and Geman, 1984;",
"ref_id": "BIBREF6"
},
{
"start": 710,
"end": 733,
"text": "Gelfand and Smith, 1990",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Naturally, one may consider \"frequency matching\" as estimating the grammar based on the maximumlikelihood criterion. Given a set of constraints and candidates, the data may be compiled in the form of (1), on which the likelihood calculation is based. As an example, given the grammar and data set in Table 1 , the likelihood of d=\"max{C1, C2} > C3\" can be written as",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 308,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The difficulty of a maximum-likelihood approach",
"sec_num": "2"
},
{
"text": "P (d|\u00b5 1 , \u00b5 2 , \u00b5 3 ) = 1 \u2212 \u222b_{\u2212\u221e}^0 \u222b_{\u2212\u221e}^0 (1/(2\u03c0\u03c3 2 )) exp(\u2212 f xy \u2022 \u03a3 \u2022 f xy T / 2) dx dy, where f xy = (x \u2212 \u00b5 1 + \u00b5 3 , y \u2212 \u00b5 2 + \u00b5 3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The difficulty of a maximum-likelihood approach",
"sec_num": "2"
},
{
"text": ", and \u03a3 is the identity covariance matrix. The integral form follows from the fact that both C 1 \u2212 C 3 and C 2 \u2212 C 3 are normal, since each constraint is independently normally distributed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The difficulty of a maximum-likelihood approach",
"sec_num": "2"
},
{
"text": "If we treat each data as independently generated by the grammar, then the likelihood will be a product of such integrals (multiple integrals if many constraints are interacting). One may attempt to maximize such a likelihood function using numerical methods 3 , yet it appears to be desirable to avoid likelihood calculations altogether.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The difficulty of a maximum-likelihood approach",
"sec_num": "2"
},
{
"text": "The Bayesian approach tries to explore p(G|D), the posterior distribution. Notice that if we take the usual approach by using the relationship p(G|D) \u221d p(D|G) \u2022 p(G), we will encounter the same problem as in Section 2. Therefore we need a feasible way of sampling p(G|D) without having to derive the closed form of p(D|G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "The key idea here is the so-called \"missing data\" scheme in Bayesian statistics: in a complex model-fitting problem, the computation can sometimes be greatly simplified if we treat part of the unknown parameters as data and fit the model in successive stages. To apply this idea, one needs to observe that Stochastic OT grammars are learned from ordinal data, as seen in (1). In other words, only one aspect of the structure generated by those normal distributions -the ordering of constraints -is used to generate outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "This observation points to the possibility of treating the sample values of constraints y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "= (y 1 , y 2 , \u2022 \u2022 \u2022 , y N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "that satisfy the ordering relations as missing data. It is appropriate to refer to them as \"missing\" because a language learner obviously cannot observe real numbers from the constraints, which are postulated by linguistic theory. When the observed data are augmented with missing data and become a complete data model, computation becomes significantly simpler. This type of idea is officially known as Data Augmentation (Tanner and Wong, 1987) . More specifically, we also make the following intuitive observations:",
"cite_spans": [
{
"start": 422,
"end": 445,
"text": "(Tanner and Wong, 1987)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "\u2022 The complete data model consists of 3 random variables: the observed ordering relations D, the grammar G, and the missing samples of constraint values Y that generate the ordering D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "\u2022 G and Y are interdependent: (Geman and Geman, 1984) . In the same order as described in Section 3, the two conditional sampling steps are implemented as follows:",
"cite_spans": [
{
"start": 30,
"end": 53,
"text": "(Geman and Geman, 1984)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "-For each fixed d,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "1. Sample an ordering relation d according to the prior p(D), which is simply normalized frequency counts; sample a vector of constraint values y = {y 1 , \u2022 \u2022 \u2022 , y N } from the normal distributions N (\u00b5 1 (t) , \u03c3 2 ), \u2022 \u2022 \u2022 , N (\u00b5 N (t) , \u03c3 2 ) such that y observes the ordering in d;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The missing data scheme for learning Stochastic OT grammars",
"sec_num": "3"
},
{
"text": "Step 1 and obtain M samples of missing data: y 1 , \u2022 \u2022 \u2022 , y M ; sample \u00b5 i (t+1) from N (\u2211 j y j i /M, \u03c3 2 /M ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "2."
},
{
"text": "The grammar G = (\u00b5 1 , \u2022 \u2022 \u2022 , \u00b5 N ), and the superscript (t) represents a sample of G in iteration t. As explained in Section 3, Step 1 samples missing data from p(Y |G, D), and Step 2 is equivalent to sampling from p(G|Y, D), by the conditional independence of G and D given Y . The normal posterior distribution N (\u2211 j y j i /M, \u03c3 2 /M ) is derived by using p(G|Y ) \u221d p(Y |G)p(G), where p(Y |G) is normal, and p(G) \u223c N (\u00b5 0 , \u03c3 0 ) is chosen to be a noninformative prior with \u03c3 0 \u2192 \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "2."
},
{
"text": "M (the number of missing data) is not a crucial parameter. In our experiments, M is set to the total number of observed forms 4 . Although it may seem that \u03c3 2 /M is small for a large M and does not play a significant role in the sampling of \u00b5 (t+1) i , the variance of the sampling distribution is a necessary ingredient of the Gibbs sampler 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "2."
},
{
"text": "Under fairly general conditions (Geman and Geman, 1984) , the Gibbs sampler iterates these two steps until it converges to a unique stationary distribution. In practice, convergence can be monitored by calculating cross-sample statistics from multiple Markov chains with different starting points (Gelman and Rubin, 1992) . After the simulation is stopped at convergence, we will have obtained a perfect sample of p(G, Y |D). These samples can be used to derive our target distribution p(G|D) by simply keeping all the G components, since p(G|D) is a marginal distribution of p(G, Y |D). Thus, the sampling-based approach gives us the advantage of doing inference without performing any integration.",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Geman and Geman, 1984)",
"ref_id": "BIBREF6"
},
{
"start": 297,
"end": 321,
"text": "(Gelman and Rubin, 1992)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Repeat",
"sec_num": "2."
},
{
"text": "In this section, I will sketch some key steps in the implementation of the Gibbs sampler. Particular attention is paid to sampling p(Y |G, D), since a direct implementation may require an unrealistic running time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational issues in implementation",
"sec_num": "5"
},
{
"text": "The prior probability p(D) determines the number of samples (missing data) that are drawn under each ordering relation. The following example illustrates how the ordering D and p(D) are calculated from data collected in a linguistic analysis. Consider a data set that contains 2 inputs and a few outputs, each associated with an observed frequency in the lexicon: The three ordering relations (corresponding to 3 attested outputs) and p(D) are computed as follows. (Footnote 5: as required by the proof in Geman and Geman, 1984.) Here each ordering relation has several conjuncts, and the number of conjuncts is equal to the number of competing candidates for each given input. These conjuncts need to hold simultaneously because each winning candidate needs to be more harmonic than all other competing candidates. The probabilities p(D) are obtained by normalizing the frequencies of the surface forms in the original data. This will have the consequence of placing more weight on lexical items that occur frequently in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing p(D) from linguistic data",
"sec_num": "5.1"
},
{
"text": "Input x 1 : candidate y 11 violates (C1, C2, C3, C4, C5) = (0, 1, 0, 1, 0), freq. 4; y 12 = (1, 0, 0, 0, 0), freq. 3; y 13 = (0, 1, 1, 0, 1), freq. 0; y 14 = (0, 0, 1, 0, 0), freq. 0. Input x 2 : y 21 = (1, 1, 0, 0, 0), freq. 3; y 22 = (0, 0, 1, 1, 1), freq. 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing p(D) from linguistic data",
"sec_num": "5.1"
},
{
"text": "Ordering relation D and p(D): (i) C1 > max{C2, C4}, max{C3, C5} > C4, and C3 > max{C2, C4}, with p(D) = .4; (ii) max{C2, C4} > C1, max{C2, C3, C5} > C1, and C3 > C1, with p(D) = .3; (iii) max{C3, C4, C5} > max{C1, C2}, with p(D) = .3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ordering Relation",
"sec_num": null
},
{
"text": "A direct implementation of sampling p(Y |G, d) is straightforward: 1) first obtain N samples from the N Gaussian distributions; 2) check each conjunct to see if the ordering relation is satisfied. If so, then keep the sample; if not, discard the sample and try again. However, this can be highly inefficient in many cases. For example, if m constraints appear in the ordering relation d and the sample is rejected, the N \u2212 m random numbers for constraints not appearing in d are also discarded. When d has several conjuncts, the chance of rejecting samples for irrelevant constraints is even greater.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling p(Y |G, D) under complex ordering relations",
"sec_num": "5.2"
},
{
"text": "In order to save the generated random numbers, the vector Y can be decomposed into its 1-dimensional components (Y 1 , Y 2 , \u2022 \u2022 \u2022 , Y N ). The problem then becomes sampling p(Y 1 , \u2022 \u2022 \u2022 , Y N |G, D). Again, we may use conditional sampling to draw y i one at a time: we keep y j\u2260i and d fixed 6 , and draw y i so that d holds for y. There are now two cases: if d holds regardless of y i , then any sample from N (\u00b5 i (t) , \u03c3 2 ) will do; otherwise, we will need to draw y i from a truncated normal distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling p(Y |G, D) under complex ordering relations",
"sec_num": "5.2"
},
{
"text": "To illustrate this idea, consider an example used earlier where d=\"max{c 1 , c 2 } > c 3 \", and the initial sample and parameters are (y 1 (0) , y 2 (0) , y 3 (0) ) = (\u00b5 1 (0) , \u00b5 2 (0) , \u00b5 3 (0) ) = (1, \u22121, 0). The coordinate-wise conditional draws then proceed as follows, listing each sampling distribution with the resulting values of (Y 1 , Y 2 , Y 3 ): p(Y 1 |\u00b5 1 , Y 1 > y 3 ): (2.3799, -1.0000, 0); p(Y 2 |\u00b5 2 ): (2.3799, -0.7591, 0); p(Y 3 |\u00b5 3 , Y 3 < y 1 ): (2.3799, -0.7591, -1.0328); p(Y 1 |\u00b5 1 ): (-1.4823, -0.7591, -1.0328); p(Y 2 |\u00b5 2 , Y 2 > y 3 ): (-1.4823, 2.1772, -1.0328); p(Y 3 |\u00b5 3 , Y 3 < y 2 ): (-1.4823, 2.1772, 1.0107). The target distribution of these draws is p(Y 1 , Y 2 , Y 3 |\u00b5 1 , \u00b5 2 , \u00b5 3 , d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling p(Y |G, D) under complex ordering relations",
"sec_num": "5.2"
},
{
"text": "Notice that in each step, the sampling density is either just a normal, or a truncated normal distribution. This is because we only need to make sure that d will continue to hold for the next sample y (t+1) , which differs from y (t) by just 1 constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling p(Y |G, D) under complex ordering relations",
"sec_num": "5.2"
},
{
"text": "In our experiment, sampling from truncated normal distributions is realized by using the idea of rejection sampling: to sample from a truncated normal 7 \u03c0 c (x) = (1/Z(c)) \u2022 N (\u00b5, \u03c3) \u2022 I {x>c} , we first find an envelope density function g(x) that is easy to sample directly, such that \u03c0 c (x) is uniformly bounded by M \u2022 g(x) for some constant M that does not depend on x. It can be shown that once each sample x from g(x) is rejected with probability r(x) = 1 \u2212 \u03c0 c (x)/(M \u2022 g(x)), the resulting histogram will provide a perfect sample for \u03c0 c (x). In the current work, the exponential distribution g(x) = \u03bb exp {\u2212\u03bbx} is used as the envelope, with the following choices for \u03bb and the rejection ratio r(x), which have been optimized to lower the rejection rate: \u03bb = (c + \u221a(c 2 + 4\u03c3 2 ))/(2\u03c3 2 ); r(x) = exp{(x + c) 2 /2 + \u03bb 0 (x + c) \u2212 \u03c3 2 \u03bb 0 2 /2}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling p(Y |G, D) under complex ordering relations",
"sec_num": "5.2"
},
{
"text": "Putting these ideas together, the final version of Gibbs sampler is constructed by implementing Step 1 in Section 4 as a sequence of conditional sampling steps for p(Y i |Y j =i , d), and combining them with the sampling of p (G|Y, D) . Notice the order in which Y i is updated is fixed, which makes our implementation an instance of the systematic-scan Gibbs sampler (Liu, 2001 ). This implementation may be improved even further by utilizing the structure of the ordering relation d, and optimizing the order in which Y i is updated.",
"cite_spans": [
{
"start": 226,
"end": 234,
"text": "(G|Y, D)",
"ref_id": null
},
{
"start": 368,
"end": 378,
"text": "(Liu, 2001",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling p(Y |G, D) under complex ordering relations",
"sec_num": "5.2"
},
{
"text": "Identifiability is related to the uniqueness of solution in model fitting. Given N constraints, a grammar G \u2208 R N is not identifiable because G + C will have the same behavior as G for any constant C = (c 0 , \u2022 \u2022 \u2022 , c 0 ). To remove translation invariance, in Step 2 the average ranking value is subtracted from G, such that \u2211 i \u00b5 i = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model identifiability",
"sec_num": "5.3"
},
{
"text": "Another problem related to identifiability arises when the data contains the so-called \"categorical domination\", i.e., there may be data of the following form: c 1 > c 2 with probability 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model identifiability",
"sec_num": "5.3"
},
{
"text": "In theory, the mode of the posterior tends to infinity and the Gibbs sampler will not converge. Since having categorical dominance relations is a common practice in linguistics, we avoid this problem by truncating the posterior distribution 8 by I |\u00b5|<K , where K is chosen to be a positive number large enough to ensure that the model be identifiable. The role of truncation/renormalization may be seen as a strong prior that makes the model identifiable on a bounded set. A third problem related to identifiability occurs when the posterior has multiple modes, which suggests that multiple grammars may generate the same output frequencies. This situation is common when the grammar contains interactions between many constraints, and greedy algorithms like GLA tend to find one of the many solutions. In this case, one can either introduce extra ordering relations or use informative priors to sample p(G|Y ), so that the inference on the posterior can be done with a relatively small number of samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model identifiability",
"sec_num": "5.3"
},
{
"text": "Once the Gibbs sampler has converged to its stationary distribution, we can use the samples to make various inferences on the posterior. In the experiments reported in this paper, we are primarily interested in the mode of the posterior marginal 9 p(\u00b5 i |D), where i = 1, \u2022 \u2022 \u2022 , N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior inference",
"sec_num": "5.4"
},
{
"text": "In cases where the posterior marginal is symmetric and uni-modal, its mode can be estimated by the sample median.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior inference",
"sec_num": "5.4"
},
{
"text": "In real linguistic applications, the posterior marginal may be a skewed distribution, and many modes may appear in the histogram. In these cases, more sophisticated non-parametric methods, such as kernel density estimation, can be used to estimate the modes. To reduce the computation in identifying multiple modes, a mixture approximation (by EM algorithm or its relatives) may be necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Posterior inference",
"sec_num": "5.4"
},
{
"text": "The following Ilokano grammar and data set, used in (Boersma and Hayes, 2001), illustrate a complex type of constraint interaction: the interaction between the three constraints *COMPLEX-ONSET, ALIGN, and IDENT BR ([long]) cannot be factored into interactions between 2 constraints. For any given candidate to be optimal, the constraint that prefers such a candidate must simultaneously dominate the other two constraints. Hence it is not immediately clear whether there is a grammar that will assign equal probability to the 3 candidates. For input /HRED-bwaja/, each of the three candidates has p(.) = .33, with violations of ( *C-ONS, AL, I BR ) as follows: bu:.bwa.ja (1, 0, 1); bwaj.bwa.ja (2, 0, 0); bub.wa.ja (0, 1, 0). Since it does not address the problem of identifiability, the GLA does not always converge on this data set, and the returned grammar does not always fit the input frequencies exactly, depending on the choice of parameters 10 .",
"cite_spans": [
{
"start": 52,
"end": 77,
"text": "(Boersma and Hayes, 2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ilokano reduplication",
"sec_num": "6.1"
},
{
"text": "In comparison, the Gibbs sampler converges quickly 11 , regardless of the parameters. The result suggests the existence of a unique grammar that will assign equal probabilities to the 3 candidates. The posterior samples and histograms are displayed in Figure 1 . Using the median of the marginal posteriors, the estimated grammar generates an exact fit to the frequencies in the input data. ",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 260,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ilokano reduplication",
"sec_num": "6.1"
},
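To make the generation process concrete, the tableau above can be simulated directly (a sketch under the standard Stochastic OT evaluation model, not the paper's learner; the unit noise variance and all names here are my own choices): each evaluation perturbs the ranking values with Gaussian noise, and the winner is the candidate whose violation profile is lexicographically best under the resulting constraint order.

```python
import numpy as np

# Violation profiles for /HRED-bwaja/ under (*C-ONS, ALIGN, IDENT-BR),
# taken from the tableau in the text.
CANDIDATES = {
    "bu:.bwa.ja":  (1, 0, 1),
    "bwaj.bwa.ja": (2, 0, 0),
    "bub.wa.ja":   (0, 1, 0),
}

def sample_winner(mu, rng, sigma=1.0):
    """One evaluation: add N(0, sigma^2) noise to each ranking value,
    order constraints by the noisy values, and return the candidate whose
    violation vector is lexicographically smallest in that order."""
    noisy = mu + rng.normal(0.0, sigma, size=mu.size)
    order = np.argsort(-noisy)  # highest-ranked constraint first
    return min(CANDIDATES, key=lambda c: tuple(CANDIDATES[c][i] for i in order))

def estimate_frequencies(mu, n=20000, seed=0):
    """Monte Carlo estimate of each candidate's output frequency."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    wins = {c: 0 for c in CANDIDATES}
    for _ in range(n):
        wins[sample_winner(mu, rng)] += 1
    return {c: w / n for c, w in wins.items()}

# With all ranking values equal, the six constraint orders are equally
# likely; [bub.wa.ja] wins under three of them, [bwaj.bwa.ja] under two,
# and [bu:.bwa.ja] under one, so equal means give roughly 1/2, 1/3, 1/6
# rather than the observed .33/.33/.33. Fitting the data requires tuning mu.
freqs = estimate_frequencies([0.0, 0.0, 0.0])
```

This makes the paper's point tangible: a naive grammar with equal ranking values does not reproduce the uniform candidate frequencies, so the existence of a fitting grammar is a genuine inference problem.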
{
"text": "The second experiment uses linguistic data on Spanish diminutives and the analysis proposed in (Arbisi-Kelm, 2002) . There are 3 base forms, each associated with 2 diminutive suffixes. In the results found by GLA, [marEsito] always has a lower frequency than [marsito] (See Table 7 ). This is not accidental. Instead it reveals a problematic use of heuristics in GLA 12 : since the constraint B is violated by [ubita] , it is always demoted whenever the underlying form /uba/ is encountered during learning. Therefore, even though the expected model assigns equal values to \u00b5 3 and \u00b5 4 (corresponding to D and B, respectively), \u00b5 3 is always less than \u00b5 4 , simply because there is more chance of penalizing D rather than B. This problem arises precisely because of the heuristic (i.e. demoting the constraint that prefers the wrong candidate) that GLA uses to find the target grammar.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Arbisi-Kelm, 2002)",
"ref_id": "BIBREF1"
},
{
"start": 410,
"end": 417,
"text": "[ubita]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Spanish diminutive suffixation",
"sec_num": "6.2"
},
{
"text": "The Gibbs sampler, on the other hand, does not depend on heuristic rules in its search. Since modes of the posterior p(\u00b5 3 |D) and p(\u00b5 4 |D) reside in negative infinity, the posterior is truncated by I \u00b5 i <K , with K = 6, based on the discussion in 5.3. Results of the Gibbs sampler and two runs of GLA 13 are reported in Table 7 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 323,
"end": 330,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Spanish diminutive suffixation",
"sec_num": "6.2"
},
{
"text": "Previously, problems with the GLA 14 have inspired other OT-like models of linguistic variation. One such proposal suggests using the more well-known Maximum Entropy model (Goldwater and Johnson, 2003) . In Max-Ent models, a grammar G is also parameterized by a real vector of weights w = (w 1 , \u2022 \u2022 \u2022 , w N ), but the conditional likelihood of an output y given an input x is given by:",
"cite_spans": [
{
"start": 172,
"end": 201,
"text": "(Goldwater and Johnson, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A comparison with Max-Ent models",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|x) = exp{ i w i f i (y, x)} z exp{ i w i f i (z, x)}",
"eq_num": "(2)"
}
],
"section": "A comparison with Max-Ent models",
"sec_num": "7"
},
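Equation (2) can be evaluated directly from violation counts. The sketch below is an illustration of the formula, not code from Goldwater and Johnson; the candidate names and weights are hypothetical, and harmony-reducing constraints carry negative weights so that more violations mean lower probability.

```python
import numpy as np

def maxent_probs(weights, violations):
    """p(y|x) = exp{sum_i w_i f_i(y,x)} / sum_z exp{sum_i w_i f_i(z,x)}.

    `violations[y]` holds the constraint scores f(y, x) for candidate y.
    With raw violation *counts* as f, constraints should get negative
    weights so that violating them lowers a candidate's probability."""
    cands = list(violations)
    scores = np.array([float(np.dot(weights, violations[y])) for y in cands])
    scores -= scores.max()          # subtract the max for numerical stability
    p = np.exp(scores)
    return dict(zip(cands, p / p.sum()))

# Hypothetical two-candidate example: "a" violates constraint 1 once,
# "b" violates constraint 2 once, with weights (-2, -1).
probs = maxent_probs([-2.0, -1.0], {"a": [1, 0], "b": [0, 1]})
```

Unlike Stochastic OT, where probabilities arise from noisy constraint reranking, here they come straight from a log-linear combination of violations, which is why Max-Ent can fit distributions that no ranking-noise grammar can.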
{
"text": "where f i (y, x) is the violation each constraint assigns to the input-output pair (x, y). Clearly, Max-Ent is a rather different type of model from Stochastic OT, not only in the use of constraint ordering, but also in the objective function (conditional likelihood rather than likelihood/posterior). However, it may be of interest to compare these two types of models. Using the same data as in 6.2, results of fitting Max-Ent (using conjugate gradient descent) and Stochastic OT (using Gibbs sampler) are reported in It can be seen that the Max-Ent model, in the absence of a smoothing prior, fits the data perfectly by assigning positive weights to constraints B and D. A less exact fit (denoted by ME sm ) is obtained when the smoothing Gaussian prior is used with \u00b5 i = 0, \u03c3 2 i = 1. But as observed in 6.2, an exact fit is impossible to obtain using Stochastic OT, due to the difference in the way variation is generated by the models. Thus it may be seen that Max-Ent is a more powerful class of models than Stochastic OT, though it is not clear how the Max-Ent model's descriptive power is related to generative linguistic theories like phonology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A comparison with Max-Ent models",
"sec_num": "7"
},
{
"text": "Although the abundance of well-behaved optimization algorithms has been pointed out in favor of Max-Ent models, it is the author's hope that the MCMC approach also gives Stochastic OT a similar underpinning. However, complex Stochastic OT models often bring worries about identifiability, whereas the convexity property of Max-Ent may be viewed as an advantage 15 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A comparison with Max-Ent models",
"sec_num": "7"
},
{
"text": "From a non-Bayesian perspective, the MCMC-based approach can be seen as a randomized strategy for learning a grammar. Computing resources make it possible to explore the entire space of grammars and discover where good hypotheses are likely to occur. In this paper, we have focused on the frequently visited areas of the hypothesis space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "It is worth pointing out that the Graduate Learning Algorithm can also be seen from this perspective. An examination of the GLA shows that when the plasticity term is fixed, parameters found by GLA also form a Markov chain G (t) ",
"cite_spans": [
{
"start": 225,
"end": 228,
"text": "(t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "\u2208 R N , t = 1, 2, \u2022 \u2022 \u2022 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Therefore, assuming the model is identifiable, it seems possible to use GLA in the same way as the MCMC methods: rather than forcing it to stop, we can run GLA until it reaches stationary distribution, if it exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "However, it is difficult to interpret the results found by this \"random walk-GLA\" approach: the stationary distribution of GLA may not be the target distribution -the posterior p(G|D). To construct a Markov chain that converges to p(G|D), one may consider turning GLA into a real MCMC algorithm by designing reversible jumps, or the Metropolis algorithm. But this may not be easy, due to the difficulty in likelihood evaluation (including likelihood ratio) discussed in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "In contrast, our algorithm provides a general solution to the problem of learning Stochastic OT grammars. Instead of looking for a Markov chain in R N , we go to a higher dimensional space R N \u00d7 R N , using the idea of data augmentation. By taking advantage of the interdependence of G and Y , the Gibbs sampler provides a Markov chain that converges to p(G, Y |D), which allows us to return to the original subspace and derive p(G|D) -the target distribution. Interestingly, by adding more parameters, the computation becomes simpler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
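To illustrate the augmentation idea on a deliberately tiny case (two constraints, one input, unit evaluation noise), the following sketch alternates the two conditional draws. It is a simplification of the paper's algorithm, not a reimplementation: latent noisy constraint values Y are drawn consistently with each observed winner by rejection (instead of the truncated-normal scheme of 5.2), and G is then resampled from its normal posterior given Y under a flat prior; the function name and all numeric settings are my own.

```python
import numpy as np

def gibbs_stochastic_ot(data, n_iter=2000, seed=0):
    """Toy data-augmentation Gibbs sampler for a two-constraint grammar.

    `data` is a list of booleans: True when the observed winner was the
    candidate preferred by constraint 1, i.e. its noisy value y1 exceeded y2.
    Returns the trace of G = (mu1, mu2), centered to remove the
    additive-constant indeterminacy."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(2)
    trace = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Step 1: sample Y | G, D. For each datum, rejection-sample the
        # pair of noisy constraint values until it respects the observed
        # ranking relation.
        ys = np.empty((len(data), 2))
        for j, d in enumerate(data):
            while True:
                y = mu + rng.normal(0.0, 1.0, size=2)
                if (y[0] > y[1]) == d:
                    ys[j] = y
                    break
        # Step 2: sample G | Y. With a flat prior and unit evaluation
        # noise, each mu_i has a normal posterior centered on the mean
        # of its latent values, with variance 1/M.
        mu = rng.normal(ys.mean(axis=0), 1.0 / np.sqrt(len(data)))
        mu -= mu.mean()  # grammars are identifiable only up to a shift
        trace[t] = mu
    return trace

# 70 of 100 observations favor constraint 1's candidate. Since
# P(y1 > y2) = Phi((mu1 - mu2) / sqrt(2)), the posterior for mu1 - mu2
# should concentrate near sqrt(2) * Phi^{-1}(0.7), about 0.74.
trace = gibbs_stochastic_ot([True] * 70 + [False] * 30)
diff = (trace[500:, 0] - trace[500:, 1]).mean()
```

The paper's actual sampler handles many constraints and full sets of ordering relations D; the rejection step here would become inefficient in that setting, which is why Section 5.2 samples from truncated normals instead.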
{
"text": "This work can be extended in two directions. First, it would be interesting to consider other types of OT grammars, in connection with the linguistics literature. For example, the variances of the normal distribution are fixed in the current paper, but they may also be treated as unknown parameters (Nagy and Reynolds, 1997) . Moreover, constraints may be parameterized as mixture distributions, which represent other approaches to using OT for modeling linguistic variation (Anttila, 1997) .",
"cite_spans": [
{
"start": 300,
"end": 325,
"text": "(Nagy and Reynolds, 1997)",
"ref_id": "BIBREF12"
},
{
"start": 476,
"end": 491,
"text": "(Anttila, 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "9"
},
{
"text": "The second direction is to introduce informative priors motivated by linguistic theories. It is found through experimentation that for more sophisticated grammars, identifiability often becomes an issue: some constraints may have multiple modes in their posterior marginal, and it is difficult to extract modes in high dimensions 16 . Therefore, use of priors is needed in order to make more reliable inferences. In addition, priors also have a linguistic appeal, since",
"cite_spans": [
{
"start": 330,
"end": 332,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "9"
},
{
"text": "Up to translation by an additive constant. 2 Two examples included in the experiment section. See 6.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Notice even computing the gradient is non-trivial. those that observe d. Then we let d vary with its frequency in the data, and obtain a sample of p(Y |G, D); -Once we have the values of Y that respect the ranking relations D, G becomes independent of D. Thus, sampling G from p(G|Y, D) becomes the same as sampling from p(G|Y ).4 Gibbs sampler for the joint posteriorp(G, Y |D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other choices of M , e.g. M = 1, lead to more or less the same running time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Here we use y j =i for all components of y except the i-th dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Notice the truncated distribution needs to be re-normalized in order to be a proper density.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The implementation of sampling from truncated normals is the same as described in 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note G = (\u00b51, \u2022 \u2022 \u2022 , \u00b5N ), and p(\u00b5i|D) is a marginal of p(G|D).10 B &H reported results of averaging many runs of the algorithm. Yet there appears to be significant randomness in each run of the algorithm.11 Within 1000 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Thanks to Bruce Hayes for pointing out this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The two runs here both use 0.002 and 0.0001 as the final plasticity. The initial plasticity and the iterations are set to 2 and 1.0e7. Slightly better fits can be found by tuning these parameters, but the observation remains the same.14 See(Keller and Asudeh, 2002) for a summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Concerns about identifiability appear much more frequently in statistics than in linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Notice that posterior marginals do not provide enough information for modes of the joint distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "current research on the \"initial bias\" in language acquisition can be formulated as priors (e.g. Faithfulness Low (Hayes, 2004) ) from a Bayesian perspective.Implementing these extensions will merely involve modifying p(G|Y, D), which we leave for future work.",
"cite_spans": [
{
"start": 114,
"end": 127,
"text": "(Hayes, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Variation in Finnish Phonology and Morphology",
"authors": [
{
"first": "A",
"middle": [],
"last": "Anttila",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anttila, A. (1997). Variation in Finnish Phonology and Mor- phology. PhD thesis, Stanford University.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An analysis of variability in Spanish diminutive formation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Arbisi-Kelm",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arbisi-Kelm, T. (2002). An analysis of variability in Spanish diminutive formation. Master's thesis, UCLA, Los Angeles.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "How we learn variation, optionality, probability",
"authors": [
{
"first": "P",
"middle": [],
"last": "Boersma",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Institute of Phonetic Sciences 21",
"volume": "",
"issue": "",
"pages": "43--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boersma, P. (1997). How we learn variation, optionality, prob- ability. In Proceedings of the Institute of Phonetic Sciences 21, pages 43-58, Amsterdam. University of Amsterdam.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Empirical tests of the Gradual Learning Algorithm",
"authors": [
{
"first": "P",
"middle": [],
"last": "Boersma",
"suffix": ""
},
{
"first": "B",
"middle": [
"P"
],
"last": "Hayes",
"suffix": ""
}
],
"year": 2001,
"venue": "Linguistic Inquiry",
"volume": "32",
"issue": "",
"pages": "45--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boersma, P. and Hayes, B. P. (2001). Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry, 32:45-86.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sampling-based approaches to calculating marginal densities",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gelfand",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Statistical Association",
"volume": "",
"issue": "410",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gelfand, A. and Smith, A. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Inference from iterative simulation using multiple sequences",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gelman",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1992,
"venue": "Statistical Science",
"volume": "7",
"issue": "",
"pages": "457--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gelman, A. and Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7:457-472.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images",
"authors": [
{
"first": "S",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Geman",
"suffix": ""
}
],
"year": 1984,
"venue": "IEEE Trans. on Pattern Analysis and Machine Intelligence",
"volume": "6",
"issue": "6",
"pages": "721--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 6(6):721-741.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning OT constraint rankings using a Maximum Entropy model",
"authors": [
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Workshop on Variation within Optimality Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldwater, S. and Johnson, M. (2003). Learning OT constraint rankings using a Maximum Entropy model. In Spenader, J., editor, Proceedings of the Workshop on Variation within Optimality Theory, Stockholm.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Phonological acquisition in optimality theory: The early stages",
"authors": [
{
"first": "B",
"middle": [
"P"
],
"last": "Hayes",
"suffix": ""
}
],
"year": 2004,
"venue": "Fixing Priorities: Constraints in Phonological Acquisition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hayes, B. P. (2004). Phonological acquisition in optimality the- ory: The early stages. In Kager, R., Pater, J., and Zonneveld, W., editors, Fixing Priorities: Constraints in Phonological Acquisition. Cambridge University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Probabilistic learning algorithms and Optimality Theory",
"authors": [
{
"first": "F",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Asudeh",
"suffix": ""
}
],
"year": 2002,
"venue": "Linguistic Inquiry",
"volume": "33",
"issue": "2",
"pages": "225--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keller, F. and Asudeh, A. (2002). Probabilistic learning algorithms and Optimality Theory. Linguistic Inquiry, 33(2):225-244.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Monte Carlo Strategies in Scientific Computing",
"authors": [],
"year": null,
"venue": "Springer Statistics Series",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monte Carlo Strategies in Scientific Com- puting. Number 33 in Springer Statistics Series. Springer- Verlag, Berlin.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Optimality theory and variable word-final deletion in Faetar. Language Variation and Change",
"authors": [
{
"first": "N",
"middle": [],
"last": "Nagy",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Reynolds",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nagy, N. and Reynolds, B. (1997). Optimality theory and vari- able word-final deletion in Faetar. Language Variation and Change, 9.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Optimality Theory: Constraint Interaction in Generative Grammar. Forthcoming",
"authors": [
{
"first": "A",
"middle": [],
"last": "Prince",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prince, A. and Smolensky, P. (1993). Optimality Theory: Con- straint Interaction in Generative Grammar. Forthcoming.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The calculation of posterior distributions by data augmentation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tanner",
"suffix": ""
},
{
"first": "W",
"middle": [
"H"
],
"last": "Wong",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of the American Statistical Association",
"volume": "",
"issue": "398",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanner, M. and Wong, W. H. (1987). The calculation of poste- rior distributions by data augmentation. Journal of the Amer- ican Statistical Association, 82(398).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Posterior marginal samples and histograms for Experiment 2.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "The grammar consists of 4 constraints: ALIGN(TE,Word,R), MAX-OO(V), DEP-IO and BaseTooLittle. The data presents the problem of learning from noise, since no Stochastic OT grammar can provide an exact fit to the data: the candidate [ubita] violates an extra constraint compared to [liri.ito], and [ubasita] violates the same constraint as [liryosito]. Yet unlike [lityosito], [ubasita] is not observed.",
"uris": null
},
"TABREF0": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF2": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF3": {
"text": "The ordering relations D and p(D) computed fromTable 2.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF4": {
"text": "Conditional sampling steps for p(Y |G, d)",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF5": {
"text": "Data for Ilokano reduplication.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF6": {
"text": "Data for Spanish diminutive suffixation.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF8": {
"text": "Comparison of Gibbs sampler and GLA",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF9": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Input</td><td>Output</td><td colspan=\"2\">Obs SOT</td><td>ME</td><td>ME sm</td></tr><tr><td>/uba/</td><td>[ubita]</td><td colspan=\"4\">100% 95% 100% 97.5%</td></tr><tr><td/><td>[ubasita]</td><td>0%</td><td>5%</td><td>0%</td><td>2.5%</td></tr><tr><td>/mar/</td><td>[marEsito]</td><td colspan=\"2\">50% 50%</td><td>50%</td><td>48.8%</td></tr><tr><td/><td>[marsito]</td><td colspan=\"2\">50% 50%</td><td>50%</td><td>51.2%</td></tr><tr><td colspan=\"2\">/liryo/ [liri.ito]</td><td colspan=\"2\">90% 95%</td><td>90%</td><td>91.4%</td></tr><tr><td/><td>[liryosito]</td><td>10%</td><td>5%</td><td>10%</td><td>8.6%</td></tr></table>",
"num": null
},
"TABREF10": {
"text": "Comparison of Max-Ent and Stochastic OT models",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}