{
"paper_id": "J07-4003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:03:04.967147Z"
},
"title": "Weighted and Probabilistic Context-Free Grammars Are Equally Expressive",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15217",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "nasmith@cs.cmu.edu"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"postCode": "15217",
"settlement": "Pittsburgh",
"region": "PA",
"country": "USA"
}
},
"email": "markjohnson@brown.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article studies the relationship between weighted context-free grammars (WCFGs), where each production is associated with a positive real-valued weight, and probabilistic context-free grammars (PCFGs), where the weights of the productions associated with a nonterminal are constrained to sum to one. Because the class of WCFGs properly includes the PCFGs, one might expect that WCFGs can describe distributions that PCFGs cannot.",
"pdf_parse": {
"paper_id": "J07-4003",
"_pdf_hash": "",
"abstract": [
{
"text": "This article studies the relationship between weighted context-free grammars (WCFGs), where each production is associated with a positive real-valued weight, and probabilistic context-free grammars (PCFGs), where the weights of the productions associated with a nonterminal are constrained to sum to one. Because the class of WCFGs properly includes the PCFGs, one might expect that WCFGs can describe distributions that PCFGs cannot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years the field of computational linguistics has turned to machine learning to aid in the development of accurate tools for language processing. A widely used example, applied to parsing and tagging tasks of various kinds, is a weighted grammar. Adding weights to a formal grammar allows disambiguation (more generally, ranking of analyses) and can lead to more efficient parsing. Machine learning comes in when we wish to choose those weights empirically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The predominant approach for many years was to select a probabilistic modelsuch as a hidden Markov model (HMM) or probabilistic context-free grammar (PCFG)-that defined a distribution over the structures allowed by a grammar. Given a treebank, maximum likelihood estimation can be applied to learn the probability values in the model. More recently, new machine learning methods have been developed or extended to handle models of grammatical structure. Notably, conditional estimation (Ratnaparkhi, Roukos, and Ward 1994; Johnson et al. 1999; Lafferty, McCallum, and Pereira 2001) , maximum margin estimation (Taskar et al. 2004) , and unsupervised contrastive estimation (Smith and Eisner 2005) have been applied to structured models. Weighted grammars learned in this way differ in two important ways from traditional, generative models. First, the weights can be any positive value; they need not sum to one. Second, features can \"overlap,\" and it can be difficult to design a generative model that uses such features. The benefits of new features and discriminative training methods are widely documented and recognized.",
"cite_spans": [
{
"start": 486,
"end": 522,
"text": "(Ratnaparkhi, Roukos, and Ward 1994;",
"ref_id": "BIBREF14"
},
{
"start": 523,
"end": 543,
"text": "Johnson et al. 1999;",
"ref_id": "BIBREF6"
},
{
"start": 544,
"end": 581,
"text": "Lafferty, McCallum, and Pereira 2001)",
"ref_id": "BIBREF8"
},
{
"start": 610,
"end": 630,
"text": "(Taskar et al. 2004)",
"ref_id": "BIBREF17"
},
{
"start": 673,
"end": 696,
"text": "(Smith and Eisner 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This article focuses specifically on the first of these differences. It compares the expressive power of weighted context-free grammars (WCFGs), where each rule is associated with a positive weight, to that of the corresponding PCFGs, that is, with the same rules but where the weights of the rules expanding a nonterminal must sum to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One might expect that because normalization removes one or more degrees of freedom, unnormalized models should be more expressive than normalized, probabilistic models. Perhaps counterintuitively, previous work has shown that the classes of probability distributions defined by WCFGs and PCFGs are the same (Abney, McAllester, and Pereira 1999; Chi 1999) .",
"cite_spans": [
{
"start": 307,
"end": 344,
"text": "(Abney, McAllester, and Pereira 1999;",
"ref_id": "BIBREF0"
},
{
"start": 345,
"end": 354,
"text": "Chi 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "However, this result does not completely settle the question about the expressive power of WCFGs and PCFGs. As we show herein, a WCFG can define a conditional distribution from strings to trees even if it does not define a probability distribution over trees. Because these conditional distributions are what are used in classification tasks and related tasks such as parsing, we need to know the relationship between the classes of conditional distributions defined by WCFGs and PCFGs. In fact we extend the results of Chi and of Abney et al., and show that WCFGs and PCFGs both define the same class of conditional distribution. Moreover, we present an algorithm for converting an arbitrary WCFG that defines a conditional distribution over trees given strings but possibly without a finite partition function into a PCFG with the same rules as the WCFG and that defines the same conditional distribution over trees given strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This means that maximum conditional likelihood WCFGs are non-identifiable, because there are an infinite number of rule weights all of which maximize the conditional likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A CFG G is a tuple N, S, \u03a3, R where N is a finite set of nonterminal symbols, S \u2208 N is the start symbol, \u03a3 is a finite set of terminal symbols (disjoint from N), and R is a set of production rules of the form X \u2192 \u03b1 where X \u2208 N and \u03b1 \u2208 (N \u222a \u03a3) . A WCFG associates a positive number called the weight with each rule in R. 1 We denote by \u03b8 X\u2192\u03b1 the weight attached to the rule X \u2192 \u03b1, and the vector of rule weights by \u0398 = {\u03b8 A\u2192\u03b1 : A \u2192 \u03b1 \u2208 R}. A weighted grammar is the pair G \u0398 = G, \u0398 .",
"cite_spans": [
{
"start": 320,
"end": 321,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "Unless otherwise specified, we assume a fixed underlying context-free grammar G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "Let \u2126(G) be the set of (finite) trees that G generates. For any \u03c4 \u2208 \u2126(G), the score s \u0398 (\u03c4) of \u03c4 is defined as follows: X\u2192\u03b1;\u03c4) (1)",
"cite_spans": [
{
"start": 120,
"end": 126,
"text": "X\u2192\u03b1;\u03c4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "s \u0398 (\u03c4) = (X\u2192\u03b1)\u2208R (\u03b8 X\u2192\u03b1 ) f (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "where f (X \u2192 \u03b1; \u03c4) is the number of times X \u2192 \u03b1 is used in the derivation of the tree \u03c4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
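{
"text": "An illustrative Python sketch of Equation (1) (not from the original article; the dictionary encoding of rules and rule counts is assumed): a tree's score is the product of its rule weights, each raised to the number of times the rule is used in the tree.\n\nfrom collections import Counter\n\ndef tree_score(theta, rule_counts):\n    # theta: dict mapping rule -> positive weight; a rule is (lhs, rhs_tuple)\n    # rule_counts: Counter mapping rule -> f(rule; tau), its frequency in the tree tau\n    score = 1.0\n    for rule, count in rule_counts.items():\n        score *= theta[rule] ** count\n    return score\n\n# toy usage: the grammar A -> A A | a with unit weights; a tree using A -> A A once and A -> a twice\ntheta = {('A', ('A', 'A')): 1.0, ('A', ('a',)): 1.0}\ncounts = Counter({('A', ('A', 'A')): 1, ('A', ('a',)): 2})\nprint(tree_score(theta, counts))  # 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": null
},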
{
"text": "The partition function Z(\u0398) is the sum of the scores of the trees in \u2126(G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "Z(\u0398) = \u03c4\u2208\u2126(G) s \u0398 (\u03c4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "Because we have imposed no constraints on \u0398, the partition function need not equal one; indeed, as we show subsequently the partition function need not even exist. If Z(\u0398) is finite then we say that the WCFG is convergent, and we can define a Gibbs probability distribution over \u2126(G) by dividing by Z(\u0398):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "P \u0398 (\u03c4) = s \u0398 (\u03c4) Z(\u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "A probabilistic CFG, or PCFG, is a WCFG in which the sum of the weights of the rules expanding each nonterminal is one:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200X \u2208 N, (X\u2192\u03b1)\u2208R \u03b8 X\u2192\u03b1 = 1",
"eq_num": "( 2 )"
}
],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "It is easy to show that if G \u0398 is a PCFG then Z(\u0398) \u2264 1. A tight PCFG is a PCFG G \u0398 for which Z(\u0398) = 1. Necessary conditions and sufficient conditions for a PCFG to be tight are given in several places, including Booth and Thompson (1973) and Wetherell (1980) . We now describe the results of Chi (1999) and Abney, McAllester, and Pereira (1999) . Let G = {G \u0398 } denote the set of the WCFGs based on the CFG G (i.e., the WCFGs in G all have the same underlying grammar G but differ in their rule weight vectors \u0398). Let G Z<\u221e be the subset of G for which the partition function Z(\u0398) is finite, and let G Z=\u221e = G \\ G Z<\u221e be the subset of G with an infinite partition function. Further let G PCFG denote the set of PCFGs based on G. In general, G PCFG is a proper subset of G Z<\u221e , that is, every PCFG is also a WCFG, but because there are weight vectors \u0398 that don't obey Equation 2, not all WCFGs are PCFGs.",
"cite_spans": [
{
"start": 212,
"end": 237,
"text": "Booth and Thompson (1973)",
"ref_id": null
},
{
"start": 242,
"end": 258,
"text": "Wetherell (1980)",
"ref_id": "BIBREF19"
},
{
"start": 292,
"end": 302,
"text": "Chi (1999)",
"ref_id": "BIBREF2"
},
{
"start": 307,
"end": 344,
"text": "Abney, McAllester, and Pereira (1999)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "However, this does not mean that WCFGs are more expressive than PCFGs. As noted above, the WCFGs G Z<\u221e define Gibbs distributions. Again, for a fixed G, let P Z<\u221e be the probability distributions over the trees \u2126(G) defined by the WCFGs G Z<\u221e and let P PCFG be the probability distributions defined by the PCFGs G PCFG . Chi (Proposition 4) and Abney, McAllester, and Pereira (Lemma 5) showed that P Z<\u221e = P PCFG , namely, that every WCFG probability distribution is in fact generated by some PCFG. There is no \"P Z=\u221e \" because there is no finite normalizing term Z(\u0398) for such WCFGs. Chi (1999) describes an algorithm for converting a WCFG to an equivalent PCFG. Let G \u0398 be a WCFG in G Z<\u221e . If X \u2208 N is a nonterminal, let \u2126 X (G) be the set of trees rooted in X that can be built using G. Then define:",
"cite_spans": [
{
"start": 345,
"end": 351,
"text": "Abney,",
"ref_id": null
},
{
"start": 352,
"end": 363,
"text": "McAllester,",
"ref_id": null
},
{
"start": 364,
"end": 385,
"text": "and Pereira (Lemma 5)",
"ref_id": null
},
{
"start": 585,
"end": 595,
"text": "Chi (1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CFGs",
"sec_num": "2."
},
{
"text": "Z X (\u0398) = \u03c4\u2208\u2126 X (G) s \u0398 (\u03c4) For simplicity, let Z t (\u0398) = 1 for all t \u2208 \u03a3. Chi demonstrated that G \u0398 \u2208 G Z<\u221e implies that Z X (\u0398) is finite for all X \u2208 N \u222a \u03a3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
{
"text": "For every rule X \u2192 \u03b1 in R define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
{
"text": "\u03b8 X\u2192\u03b1 = \u03b8 X\u2192\u03b1 |\u03b1| i=1 Z \u03b1 i (\u0398) Z X (\u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
{
"text": "where \u03b1 i is the ith element of \u03b1 and |\u03b1| is the length of \u03b1. Chi proved that G \u0398 is a PCFG and that P \u0398 (\u03c4) = s \u0398 (\u03c4)/Z(\u0398) for all trees \u03c4 \u2208 \u2126(G).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
{
"text": "Chi did not describe how to compute the nonterminal-specific partition functions Z X (\u0398). The Z X (\u0398) are related by equations of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
{
"text": "Z X (\u0398) = \u03b1:X\u2192\u03b1\u2208R \u03b8 X\u2192\u03b1 |\u03b1| i=1 Z \u03b1 i (\u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
{
"text": "which constitute a set of nonlinear polynomial equations in Z X (\u0398). Although a numerical solver might be employed to find the Z X (\u0398), we have found that in practice iterative propagation of weights following the method described by Stolcke (1995, Section 4.7.1) converges quickly when Z(\u0398) is finite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": "2.1"
},
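{
"text": "The following Python sketch (not from the original article; the dictionary encoding of the grammar, the function name, and the fixed iteration count are assumed) illustrates the procedure just described: the nonterminal partition functions Z_X(Theta) are approximated by iterating the polynomial equations from zero, which grows toward the least fixed point when Z(Theta) is finite, and each rule is then renormalized as in Chi's construction. It is a sketch, not a definitive implementation.\n\ndef wcfg_to_pcfg(rules, nonterminals, iters=200):\n    # rules: dict mapping (lhs, rhs_tuple) -> positive weight; symbols not in\n    # nonterminals are treated as terminals, whose partition function is 1.\n    Z = {X: 0.0 for X in nonterminals}  # start at zero; values grow toward Z_X(Theta)\n    for _ in range(iters):\n        newZ = {X: 0.0 for X in nonterminals}\n        for (lhs, rhs), w in rules.items():\n            contrib = w\n            for sym in rhs:\n                contrib *= Z.get(sym, 1.0)  # Z_t(Theta) = 1 for terminals t\n            newZ[lhs] += contrib\n        Z = newZ\n    # Chi's renormalization: theta'(X -> alpha) = theta(X -> alpha) * prod_i Z_{alpha_i} / Z_X\n    pcfg = {}\n    for (lhs, rhs), w in rules.items():\n        num = w\n        for sym in rhs:\n            num *= Z.get(sym, 1.0)\n        pcfg[(lhs, rhs)] = num / Z[lhs]\n    return pcfg, Z\n\n# toy usage: a convergent WCFG whose weights do not sum to one\nrules = {('S', ('S', 'S')): 0.2, ('S', ('a',)): 0.6}\npcfg, Z = wcfg_to_pcfg(rules, {'S'})\nprint(Z['S'], sum(pcfg.values()))  # the rule probabilities for S now sum to 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chi's Algorithm for Converting WCFGs to Equivalent PCFGs",
"sec_num": null
},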
{
"text": "A common application of weighted grammars is parsing. One way to select a parse tree for a sentence x is to choose the maximum weighted parse that is consistent with the observation x:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c4 * (x) = argmax \u03c4\u2208\u2126(G):y(\u03c4)=x s \u0398 (\u03c4)",
"eq_num": "( 3 )"
}
],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "where y(\u03c4) is the yield of \u03c4. Other decision criteria exist, including minimum-loss decoding and re-ranked n-best decoding. All of these classifiers use some kind of dynamic programming algorithm to optimize over trees, and they also exploit the conditional distribution of trees given sentence observations. A WCFG defines such a conditional distribution as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u0398 (\u03c4 | x) = s \u0398 (\u03c4) \u03c4 \u2208\u2126(G):y(\u03c4 )=x s \u0398 (\u03c4 ) = s \u0398 (\u03c4) Z x (\u0398)",
"eq_num": "(4)"
}
],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "where Z x (\u0398) is the sum of scores for all parses of x. Note that Equation (4) will be ill-defined when Z x (\u0398) diverges. Because Z x (\u0398) is constant for a given x, solving Equation (3) is equivalent to choosing \u03c4 to maximize P \u0398 (\u03c4 | x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "We turn now to classes of these conditional distribution families. Let C Z<\u221e (C PCFG ) be the class of conditional distribution families that can be expressed by grammars in G Z<\u221e (G PCFG , respectively). It should be clear that, because P Z<\u221e = P PCFG , C Z<\u221e = C PCFG since a conditional family is derived by normalizing a joint distribution by its marginals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "We now define another subset of G. Let G Z n <\u221e contain every WCFG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "G \u0398 = G, \u0398 such that, for all n \u2265 0, Z n (\u0398) = \u03c4\u2208\u2126(G):|y(\u03c4)|=n s \u0398 (\u03c4) < \u221e",
"eq_num": "(5)"
}
],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "(Note that, to be fully rigorous, we should quantify n in G Z n <\u221e , writing \"G \u2200nZ n (\u0398)<\u221e .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "We use the abbreviated form to keep the notation crisp.) For any G \u0398 \u2208 G Z n <\u221e , it also follows that, for any x \u2208 L(G), Z x (\u0398) < \u221e; the converse holds as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "It follows that any WCFG in G Z n <\u221e can be used to construct a conditional distribution of trees given the sentence, for any sentence x \u2208 L(G). To do so, we only need to normalize s \u0398 (\u03c4) by Z x (\u0398) (Equation 4). Let G Z n =\u221e contain the WCFGs where some Z n (\u0398) diverge; this is a subset of G Z=\u221e . 2 To see that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "G Z=\u221e \u2229 G Z n <\u221e = \u2205, consider Example 1. Example 1 \u03b8 A\u2192A A = 1, \u03b8 A\u2192a = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "This grammar produces binary structures over strings in a + . Every such tree receives score 1. Because there are infinitely many trees, Z(\u0398) diverges. But for any fixed string a n , the number of parse trees is finite. This grammar defines a uniform conditional distribution over all binary trees, given the string. For a grammar G \u0398 to be in G Z n <\u221e , it is sufficient that, for every nonterminal X \u2208 N, the sum of scores of all cyclic derivations X \u21d2 + X be finite. Conservatively, this can be forced by eliminating epsilon rules and unary rules or cycles altogether, or by requiring the sum of cyclic derivations for every nonterminal X to sum to strictly less than one. Example 2 gives a grammar in G Z n =\u221e with a unary cyclic derivation that does not \"dampen.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
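{
"text": "A small Python sketch of Example 1 (not from the original article; counting parses with the standard recurrence is our illustration): with both weights equal to one, every binary parse of a^n scores 1, so Z(Theta) diverges over all strings, yet for any fixed a^n the number of parses is finite and the conditional distribution over them is uniform.\n\nfrom functools import lru_cache\n\n@lru_cache(maxsize=None)\ndef num_parses(n):\n    # number of parse trees the grammar A -> A A | a assigns to the string a^n\n    if n == 1:\n        return 1  # the single tree A -> a\n    return sum(num_parses(k) * num_parses(n - k) for k in range(1, n))\n\nfor n in (1, 2, 3, 4, 5):\n    trees = num_parses(n)  # the (n-1)th Catalan number\n    print(n, trees, 1.0 / trees)  # each parse of a^n has conditional probability 1/trees",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": null
},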
{
"text": "Example 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "\u03b8 A\u2192A A = 1, \u03b8 A\u2192A = 1, \u03b8 A\u2192a = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "For any given a n , there are infinitely many equally weighted parse trees, so even the set of trees for a n cannot be normalized into a distribution (Z n (\u0398) = \u221e). Generally speaking, if there exists a string x \u2208 L(G) such that the set of trees that derive x is not finite (i.e., there is no finite bound on the number of derivations for strings in L(G); the grammar in Example 2 is a simple example), then G Z n <\u221e and G Z<\u221e are separable. 3 For a given CFG G, a conditional distribution over trees given strings is a function \u03a3 * \u2192 (\u2126(G) \u2192 [0, 1]). Our notation for the set of conditional distributions that can be expressed by G Z n <\u221e is C Z n <\u221e . Note that there is no \"C Z n =\u221e \" because an infinite Z n (\u0398) implies an infinite Z(x) for some sentence x and therefore an ill-formed conditional family. Indeed, it is difficult to imagine a scenario in computational linguistics in which nondampening cyclic derivations (WCFGs in G Z n =\u221e ) are desirable, because no linguistic explanations depend crucially on arbitrary lengthening of cyclic derivations.",
"cite_spans": [
{
"start": 442,
"end": 443,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "We now state our main theorem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers and Conditional Distributions",
"sec_num": "3."
},
{
"text": "For a given CFG G, C Z n <\u221e = C Z<\u221e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 1",
"sec_num": null
},
{
"text": "Suppose we are given weights \u0398 for G such that G \u0398 \u2208 G Z n <\u221e . We will show that the sequence Z 1 (\u0398), Z 2 (\u0398), ... is bounded by an exponential function of n, then describe a transformation on \u0398 resulting in a new grammar G \u0398 that is in G Z<\u221e and defines the same family of conditional distributions (i.e., \u2200\u03c4 \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "\u2126(G), \u2200x \u2208 L(G), P \u0398 (\u03c4 | x) = P \u0398 (\u03c4 | x)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "First we prove that for all n \u2265 1 there exists some c such that Z n (\u0398) \u2264 c n . Given G \u0398 , we construct\u1e20\u0398 in CNF that preserves the total score for any x \u2208 L(G). The existence of\u1e20\u0398 was demonstrated by Goodman (1998, Section 2.6), who gives an algorithm for constructing the value-preserving weighted grammar\u1e20\u0398 from G \u0398 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "Note that\u1e20 = N , S, \u03a3,R , containing possibly more nonterminals and rules than G. The set of (finite) trees \u2126 \u1e20 is different from \u2126(G); the new trees must be binary and may include new nonterminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "Next, collapse the nonterminals inN into one nonterminal, S. The resulting grammar isG\u0398 = {S}, S, \u03a3,\u0212 ,\u0398 .\u0212 contains the rule S \u2192 S S and rules of the form S \u2192 a for a \u2208 \u03a3. The weights of these rules ar\u0207",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 S\u2192S S = \u03b2 = max(1, (X\u2192Y Z)\u2208R\u03b8 X\u2192Y Z )",
"eq_num": "( 6 )"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 S\u2192a = \u03c5 = max(1, (X\u2192b)\u2208R\u03b8 X\u2192b )",
"eq_num": "( 7 )"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "The grammarG\u0398 will allow every tree allowed by\u1e20\u0398 (modulo labels on nonterminal nodes, which are now all S). It may allow some additional trees. The score of a tree underG\u0398 will be at least as great as the sum of scores of all structurally equivalent trees under\u1e20\u0398, because \u03b2 and \u03c5 are defined to be large enough to absorb all such scores. It follows that, for all x \u2208 L(G):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s\u0398(x) \u2265 s\u0398(x) = s \u0398 (x)",
"eq_num": "( 8 )"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "Summing over all trees of any given yield length n, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z n (\u0398) \u2265 Z n (\u0398) = Z n (\u0398)",
"eq_num": "( 9 )"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "G generates all possible binary trees (with internal nodes undifferentiated) over a given sentence x in L(G). Every tree generated byG with yield length n will have the same score: \u03b2 n\u22121 \u03c5 n , because every binary tree with n terminals has exactly n \u2212 1 nonterminals. Each tree corresponds to a way of bracketing n items, so the total number of parse trees generated byG for a string of length n is the number of different ways of bracketing a sequence of n items. The total number of unlabeled binary bracketings of an n-length sequence is the nth Catalan number C n (Graham, Knuth, and Patashnik 1994) , which in turn is bounded above by 4 n (Vardi 1991) . The total number of strings of length n is |\u03a3| n . Therefore",
"cite_spans": [
{
"start": 568,
"end": 603,
"text": "(Graham, Knuth, and Patashnik 1994)",
"ref_id": "BIBREF4"
},
{
"start": 644,
"end": 656,
"text": "(Vardi 1991)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z n (\u0398) = C n |\u03a3| n \u03b2 n\u22121 \u03c5 n \u2264 4 n |\u03a3| n \u03b2 n\u22121 \u03c5 n \u2264 (4|\u03a3|\u03b2\u03c5) n",
"eq_num": "(10)"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "We now transform the original weights \u0398 as follows. For every rule (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 \u03b1) \u2208 R, let \u03b8 X\u2192\u03b1 \u2190 \u03b8 X\u2192\u03b1 (8|\u03a3|\u03b2\u03c5) t(\u03b1)",
"eq_num": "(11)"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "where t(\u03b1) is the number of \u03a3 symbols appearing in \u03b1. This transformation results in every n-length sentence having its score divided by (8|\u03a3|\u03b2\u03c5) n . The relative scores of trees with the same yield are unaffected, because they are all scaled equally. Therefore G \u0398 defines the same conditional distribution over trees given sentences as G \u0398 , which implies that G \u0398 and G \u0398 have the same highest scoring parses. Note that any sufficiently large value could stand in for 8|\u03a3|\u03b2\u03c5 to both (a) preserve the conditional distribution and (b) force Z n (\u0398) to converge. We have not found the minimum such value, but 8|\u03a3|\u03b2\u03c5 is sufficiently large. The sequence of Z n (\u0398) now converges:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z n (\u0398 ) \u2264 Z n (\u0398) (8|\u03a3|\u03b2\u03c5) n \u2264 1 2 n",
"eq_num": "(12)"
}
],
"section": "Proof",
"sec_num": null
},
{
"text": "Hence Z(\u0398 ) = \u221e n=0 Z n (\u0398 ) \u2264 2 and G \u0398 \u2208 G Z<\u221e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
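{
"text": "A Python sketch of the transformation in Equation (11) (not from the original article; the dictionary encoding and the helper name are assumed): each weight is divided by c raised to the number of terminal symbols on its right-hand side, for any constant c at least as large as 8 |Sigma| beta upsilon. Parses of the same string are all scaled by the same factor c^n, so the conditional distribution is unchanged while the sums Z_n(Theta') are forced to converge.\n\ndef rescale_weights(rules, terminals, c):\n    # rules: dict mapping (lhs, rhs_tuple) -> weight; c: a sufficiently large constant\n    rescaled = {}\n    for (lhs, rhs), w in rules.items():\n        t = sum(1 for sym in rhs if sym in terminals)  # t(alpha): terminal count in alpha\n        rescaled[(lhs, rhs)] = w / (c ** t)\n    return rescaled\n\n# toy usage: Example 1's grammar; here |Sigma| = 1 and beta = upsilon = 1, so c = 8 suffices\nrules = {('A', ('A', 'A')): 1.0, ('A', ('a',)): 1.0}\nprint(rescale_weights(rules, {'a'}, 8.0))  # only A -> a is scaled down",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},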
{
"text": "Given a CFG G, C Z n <\u221e = C PCFG .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corollary 1",
"sec_num": null
},
{
"text": "By Theorem 1, C Z n <\u221e = C Z<\u221e . We know that P Z<\u221e = P PCFG , from which it follows that C Z<\u221e = C PCFG . Hence C Z n <\u221e = C PCFG . To convert a WCFG in C Z n <\u221e into a PCFG, first apply the transformation in the proof of Theorem 1 to get a convergent WCFG, then apply Chi's method (our Section 2.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "A graphical depiction of the primary result of this article. Given a fixed set of productions, G is the set of WCFGs with exactly those productions (i.e., they vary only in the production weights), G Z<\u221e is the subset of G that defines (joint) probability distributions over trees (i.e., that have a finite partition function Z) and P Z<\u221e is the set of probability distributions defined by grammars in G Z<\u221e . Chi (1999) and Abney, McAllester, and Pereira (1999) proved that P Z<\u221e is the same as P PCFG , the set of probability distributions defined by the PCFG G PCFG with the same productions as G. Thus even though the set of WCFGs properly includes the set of PCFGs, WCFGs define exactly the same probability distributions over trees as PCFGs. This article extends these results to conditional distributions over trees conditioned on their strings. Even though the set G Z n <\u221e of WCFGs that define conditional distributions may be larger than G Z<\u221e and properly includes G PCFG , the set of conditional distributions C Z n <\u221e defined by G Z n <\u221e is equal to the set of conditional distributions C PCFG defined by PCFGs. Our proof is constructive: we give an algorithm which takes as input a WCFG G \u2208 G Z n <\u221e and returns a PCFG which defines the same conditional distribution over trees given strings as G. Figure 1 presents the main result graphically in the context of earlier results.",
"cite_spans": [
{
"start": 410,
"end": 420,
"text": "Chi (1999)",
"ref_id": "BIBREF2"
},
{
"start": 425,
"end": 462,
"text": "Abney, McAllester, and Pereira (1999)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1312,
"end": 1320,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "Hidden Markov models (HMMs) are a special case of PCFGs. The structures they produce are labeled sequences, which are equivalent to right-branching trees. We can write an HMM as a PCFG with restricted types of rules. We will refer to the unweighted, finite-state grammars that HMMs stochasticize as \"right-linear grammars.\" Rather than using the production rule notation of PCFGs, we will use more traditional HMM notation and refer to states (interchangeable with nonterminals) and paths (interchangeable with parse trees).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "In the rest of the article we distinguish between HMMs, which are probabilistic finite-state automata locally normalized just like a PCFG, and chain-structured Markov random fields (MRFs; Section 4.1), in which moves or transitions are associated with positive weights and which are globally normalized like a WCFG. 4 We also distinguish two different types of dependency structures in these automata. Abusing the standard terminology somewhat, in a Mealy automaton arcs are labeled with output or terminal symbols, whereas in a Moore automaton the states emit terminal symbols. 5 A Mealy HMM defines a probability distribution over pairs x, \u03c0 , where x is a length-n sequence x 1 , x 2 , ..., x n \u2208 \u03a3 n and \u03c0 = \u03c0 0 , \u03c0 1 , \u03c0 2 , ..., \u03c0 n \u2208 N n+1 is a state (or nonterminal) path. The distribution is given by",
"cite_spans": [
{
"start": 316,
"end": 317,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P HMM ( x, \u03c0) = n i=1 p(x i , \u03c0 i | \u03c0 i\u22121 ) p(STOP | \u03c0 n )",
"eq_num": "(13)"
}
],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "\u03c0 0 is assumed, for simplicity, to be constant and known; we also assume that every state transition emits a symbol (no arcs), an assumption made in typical tagging and chunking applications of HMMs. We can convert a Mealy HMM to a PCFG by including, for every tuple x, \u03c0, \u03c6 (x \u2208 \u03a3 and \u03c0, \u03c6 \u2208 N) such that p(x, \u03c0 | \u03c6) > 0, the rule \u03c0 \u2192 x \u03c6, with the same probability as the corresponding HMM transition. For every \u03c0 such that p(STOP | \u03c0), we include the rule \u03c0 \u2192 , with probability p(STOP | \u03c0). A Moore HMM factors the distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "p(x, \u03c0 | \u03c6) into p(x | \u03c0) \u2022 p(\u03c0 | \u03c6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": ". A Moore HMM can be converted to a PCFG by adding a new nonterminal\u03c0 for every state \u03c0 and including the rules \u03c6 \u2192\u03c0 (with probability p(\u03c0 | \u03c6)) and\u03c0 \u2192 x \u03c0 (with probability p(x | \u03c0)). Stop probabilities are added as in the Mealy case. For a fixed number of states, Moore HMMs are less probabilistically expressive than Mealy HMMs, though we can convert between the two with a change in the number of states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
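{
"text": "A Python sketch of the Mealy-HMM-to-PCFG conversion just described (not from the original article; the nested-dictionary encoding of the HMM is assumed): each transition that emits x while moving from state prev to state next becomes a right-linear rule prev -> x next with the same probability, and each positive stopping probability becomes a rule prev -> epsilon.\n\ndef mealy_hmm_to_pcfg(trans, stop):\n    # trans[prev][(x, nxt)] = p(x, nxt | prev); stop[prev] = p(STOP | prev)\n    rules = {}\n    for prev, moves in trans.items():\n        for (x, nxt), p in moves.items():\n            rules[(prev, (x, nxt))] = p  # rule prev -> x nxt\n        p_stop = stop.get(prev, 0.0)\n        if p_stop > 0.0:\n            rules[(prev, ())] = p_stop  # rule prev -> epsilon, written here as an empty rhs\n    return rules\n\n# toy usage with made-up probabilities; each nonterminal's rules sum to one, as a PCFG requires\ntrans = {'0': {('a', '1'): 1.0}, '1': {('b', '1'): 0.5}}\nstop = {'1': 0.5}\nprint(mealy_hmm_to_pcfg(trans, stop))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": null
},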
{
"text": "We consider Mealy HMMs primarily from here on. If we wish to define the distribution over paths given words, we conditionalize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "P HMM ( \u03c0 | x) = n i=1 p(x i , \u03c0 i | \u03c0 i\u22121 ) p(STOP | \u03c0 n ) \u03c0 \u2208N n+1 n i=1 p(x i , \u03c0 i | \u03c0 i\u22121 ) p(STOP | \u03c0 n ) (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "This is how scores are assigned when selecting the best path given a sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "For a grammar G that is right-linear, we can therefore talk about the set of HMM (right-linear) grammars G HMM , the set of probability distributions P HMM defined by those grammars, and C HMM , the set of conditional distributions over state paths (trees) that they define. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMMs and Related Models",
"sec_num": "4."
},
{
"text": "When the probabilities in Mealy HMMs are replaced by arbitrary positive weights, the production rules can be seen as features in a Gibbs distribution. The resulting model is a type of MRF with a chain structure; these have recently become popular in natural language processing (Lafferty, McCallum, and Pereira 2001) . Lafferty et al.'s formulation defined a conditional distribution over paths given sequences by normalizing for each sequence x:",
"cite_spans": [
{
"start": 278,
"end": 316,
"text": "(Lafferty, McCallum, and Pereira 2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P CMRF ( \u03c0 | x) = n i=1 \u03b8 \u03c0 i\u22121 ,x i ,\u03c0 i \u03b8 \u03c0 n ,STOP Z x (\u0398)",
"eq_num": "(15)"
}
],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "Using a single normalizing term Z(\u0398), we can also define a joint distribution over states and paths:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P CMRF ( x, \u03c0) = n i=1 \u03b8 \u03c0 i\u22121 ,x i ,\u03c0 i \u03b8 \u03c0 n ,STOP Z(\u0398)",
"eq_num": "(16)"
}
],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "Let G = {G \u0398 } denote the set of weighted grammars based on the unweighted rightlinear grammar G. We call these weighted grammars \"Mealy MRFs.\" As in the WCFG case, we can add the constraint Z n (\u0398) < \u221e (for all n), giving the class G Z n <\u221e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "Recall that, in the WCFG case, the move from G to G Z n <\u221e had to do with cyclic derivations. The analogous move in the right-linear grammar case involves emissions (production rules of the form X \u2192 Y). If, as in typical applications of finite-state models to natural language processing, there are no rules of the form X \u2192 Y, then G Z n <\u221e is empty and G Z n <\u221e = G. Our formulae, in fact, assume that there are no emissions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "Because Mealy MRFs are a special case of WCFGs, Theorem 1 applies to them. This means that any random field using Mealy HMM features (Mealy MRF) such that \u2200n, Z n (\u0398) < \u221e can be transformed into a Mealy HMM that defines the same conditional distribution of tags given words. 7",
"cite_spans": [
{
"start": 133,
"end": 144,
"text": "(Mealy MRF)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mealy Markov Random Fields",
"sec_num": "4.1"
},
{
"text": "For a given right-linear grammar G,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corollary 2",
"sec_num": null
},
{
"text": "C HMM = C Z<\u221e = C Z n <\u221e .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corollary 2",
"sec_num": null
},
{
"text": "Lafferty, McCallum, and Pereira's conditional random fields are typically trained to optimize a different objective function than HMMs (conditional likelihood and joint likelihood, respectively). Our result shows that optimizing either objective on the set of Mealy HMMs as opposed to Mealy MRFs will achieve the same result, modulo imperfections in the numerical search for parameter values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corollary 2",
"sec_num": null
},
{
"text": "While HMMs and chain MRFs represent the same set of conditional distributions, we can show that the maximum-entropy Markov models (MEMMs) of McCallum, Freitag, and Pereira (2000) represent a strictly smaller class of distributions.",
"cite_spans": [
{
"start": 141,
"end": 178,
"text": "McCallum, Freitag, and Pereira (2000)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum-Entropy Markov Models",
"sec_num": "4.2"
},
{
"text": "An MEMM is a similar model with a different event structure. It defines the distribution over paths given words as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum-Entropy Markov Models",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P MEMM ( \u03c0 | x) = n i=1 p(\u03c0 i | \u03c0 i\u22121 , x i )",
"eq_num": "(17)"
}
],
"section": "Maximum-Entropy Markov Models",
"sec_num": "4.2"
},
{
"text": "Unlike an HMM, the MEMM does not define a distribution over output sequences x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum-Entropy Markov Models",
"sec_num": "4.2"
},
{
"text": "The name \"maximum entropy Markov model\" comes from the fact that the conditional distributions p(\u2022 | \u03c0, x) typically have a log-linear form, rather than a multinomial form, and are trained to maximize entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum-Entropy Markov Models",
"sec_num": "4.2"
},
{
"text": "For every MEMM, there is a Mealy MRF that represents the same conditional distribution over paths given symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma 1",
"sec_num": null
},
{
"text": "By definition, the features of the MRF include triples \u03c0 i\u22121 , x i , \u03c0 i . Assign to the weight \u03b8 \u03c0 i ,x j ,\u03c0 k the value P MEMM (\u03c0 i | \u03c0 k , x j ). Assign to \u03b8 \u03c0 i ,STOP the value 1. In computing P CMRF (\u03c0 | x) (Equation 15), the normalizing term for each x will be equal to 1. MEMMs, like HMMs, are defined by locally normalized conditional multinomial distributions. This has computational advantages (no potentially infinite Z(\u0398) terms to compute). However, the set of conditional distributions of labels given terminals that can be expressed by MEMMs is strictly smaller than those expressible by HMMs (and by extension, Mealy MRFs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
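{
"text": "A minimal Python sketch of the weight assignment in the proof of Lemma 1 (not from the original article; the dictionary encodings are assumed): every Mealy MRF transition weight is set to the corresponding MEMM conditional probability and every stopping weight to one, so the per-string normalizer Z_x(Theta) in Equation (15) equals one and the two models define the same conditional distribution over paths.\n\ndef memm_to_mealy_mrf(p_memm, states):\n    # p_memm[(prev, x)] is a dict mapping next -> P_MEMM(next | prev, x)\n    theta = {}\n    for (prev, x), dist in p_memm.items():\n        for nxt, p in dist.items():\n            theta[(prev, x, nxt)] = p  # transition weight copied from the MEMM\n    for s in states:\n        theta[(s, 'STOP')] = 1.0  # stopping weights fixed to one\n    return theta\n\n# toy usage with made-up MEMM probabilities\np_memm = {('0', 'a'): {'1': 0.3, '2': 0.7}, ('1', 'b'): {'1': 1.0}, ('2', 'c'): {'2': 1.0}}\nprint(memm_to_mealy_mrf(p_memm, {'0', '1', '2'}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},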
{
"text": "For a given right-linear grammar G, C MEMM \u2282 C HMM .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 2",
"sec_num": null
},
{
"text": "We give an example of a Mealy HMM whose conditional distribution over paths (trees) given sentences cannot be represented by an MEMM. We thank Michael Collins for pointing out to us the existence of examples like this one. Define a Mealy HMM with three states named 0, 1, and 2, over an alphabet {a, b, c}, as follows. State 0 is the start state.",
"cite_spans": [
{
"start": 76,
"end": 83,
"text": "(trees)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "Under this model, P HMM (0, 1, 1 | a, b) = P HMM (0, 2, 2 | a, c) = 1. These conditional distributions cannot both be met by any MEMM. To see why, consider",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "p(1 | 0, a) \u2022 p(1 | 1, b) = p(2 | 0, a) \u2022 p(2 | 2, c) = 1 This implies that p(1 | 0, a) = p(1 | 1, b) = p(2 | 0, a) = p(2 | 2, c) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "But it is impossible for p(1 | 0, a) = p(2 | 0, a) = 1. This holds regardless of the form of the distribution p(\u2022 | \u03c0, x) (e.g., multinomial or log-linear).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "Because P(0, 1, 1 | a, b) = P(0, 2, 2 | a, c) cannot be met by any MEMM, there are distributions in the family allowed by HMMs that cannot be expressed as MEMMs, and the latter are less expressive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
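{
"text": "One concrete instantiation of the kind of Mealy HMM that Example 3 describes, with a brute-force check of its conditional path distribution (an illustrative sketch, not from the original article; the particular probability values are assumed, and any values with the same support give the same conditionals).\n\nfrom itertools import product\n\n# trans[prev][(symbol, nxt)] = p(symbol, nxt | prev); state 0 is the start state\ntrans = {0: {('a', 1): 0.5, ('a', 2): 0.5},\n         1: {('a', 1): 0.4, ('b', 1): 0.4},\n         2: {('a', 2): 0.4, ('c', 2): 0.4}}\nstop = {0: 0.0, 1: 0.2, 2: 0.2}\nstates = [0, 1, 2]\n\ndef path_score(x, path):\n    # product of transition probabilities along the path, times the stopping probability\n    score = stop[path[-1]]\n    for i, sym in enumerate(x):\n        score *= trans[path[i]].get((sym, path[i + 1]), 0.0)\n    return score\n\ndef conditional(x, path):\n    # Equation (14): normalize the path score over all state paths with pi_0 = 0\n    total = sum(path_score(x, (0,) + p) for p in product(states, repeat=len(x)))\n    return path_score(x, path) / total\n\nprint(conditional('ab', (0, 1, 1)))  # 1.0\nprint(conditional('ac', (0, 2, 2)))  # 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},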
{
"text": "It is important to note that this result applies to Mealy HMMs; our result compares models with the same dependencies among random variables. If the HMM's distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "p(x i , \u03c0 i | \u03c0 i\u22121 ) is factored into p(x i | \u03c0 i ) \u2022 p(\u03c0 i | \u03c0 i\u22121 ) (i.e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "., it is a Moore HMM), then there may exist an MEMM with the same number of states that can represent some distributions that the Moore HMM cannot. 8 One can also imagine MEMMs in which p(\u03c0",
"cite_spans": [
{
"start": 130,
"end": 149,
"text": "Moore HMM cannot. 8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "i | \u03c0 i\u22121 , x i , ...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "is conditioned on more surrounding context (x i\u22121 or x i+1 , or the entire sequence x, for example). Conditioning on more context can be done by increasing the order of the Markov model-all of our models so far have been first-order, with a memory of only the previous state. Our result can be extended to include higher-order MEMMs. Suppose we allow the MEMM to \"look ahead\" n words, factoring its distribution into p(\u03c0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "i | \u03c0 i\u22121 , x i , x i+1 , ..., x i+n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "A first-order Mealy HMM can represent some classifiers that no MEMM with finite lookahead can represent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corollary 3",
"sec_num": null
},
{
"text": "Consider again Example 3. Note that, for all m \u2265 1, it sets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "P HMM (0, m 1's 1, ..., 1 | a m b) = 1 P HMM (0, 2, ..., 2 m 2's | a m c) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "Suppose we wish to capture this in an MEMM with n symbols of look-ahead. Letting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "m = n + 1, p(1 | 0, a n+1 ) \u2022 p(1 | 1, a n b) \u2022 n i=1 p(1 | 1, a n\u2212i b) = 1 p(2 | 0, a n+1 ) \u2022 p(2 | 2, a n c) \u2022 n i=1 p(2 | 2, a n\u2212i c) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "The same issue arises as in the proof of Theorem 2: it cannot be that p(1 | 0, a n+1 ) = p(2 | 0, a n+1 ) = 1, and so this MEMM does not exist. Note that even if we allow the MEMM to \"look back\" and condition on earlier symbols (or states), it cannot represent the distribution in Example 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "Generally speaking, this limitation of MEMMs has nothing to do with the estimation procedure (we have committed to no estimation procedure in particular) but rather with the conditional structure of the model. That some model structures work better than others at real NLP tasks was discussed by Johnson (2001) and Klein and Manning (2002) . Our result-that the class of distributions allowed by MEMMs is a strict subset of those allowed by Mealy HMMs-makes this unsurprising.",
"cite_spans": [
{
"start": 296,
"end": 310,
"text": "Johnson (2001)",
"ref_id": "BIBREF5"
},
{
"start": 315,
"end": 339,
"text": "Klein and Manning (2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof",
"sec_num": null
},
{
"text": "Our result is that weighted generalizations of classical probabilistic grammars (PCFGs and HMMs) are no more powerful than the probabilistic models. This means that, insofar as log-linear models for NLP tasks like tagging and parsing are more successful than their probabilistic cousins, it is due to either (a) additional features added to the model, (b) improved estimation procedures (e.g., maximum conditional likelihood estimation or contrastive estimation), or both. (Note that the choice of estimation procedure (b) is in principle orthogonal to the choice of model, and conditional estimation should not be conflated with log-linear modeling.) For a given estimation criterion, weighted CFGs, and Mealy MRFs, in particular, cannot be expected to behave any differently than PCFGs and HMMs, respectively, unless they are augmented with more features. Abney, McAllester, and Pereira (1999) addressed the relationship between PCFGs and probabilistic models based on push-down automaton operations (e.g., the structured language model of Chelba and Jelinek, 1998) . They proved that, although the conversion may not be simple (indeed, a blow-up in the automaton's size may be incurred), given G, P PCFG and the set of distributions expressible by shift-reduce probabilistic push-down automata are weakly equivalent. Importantly, the standard conversion of a CFG into a shift-reduce PDA, when applied in the stochastic case, does not always preserve the probability distribution over trees. Our Theorem 2 bears a resemblance to that result. Further work on the relationship between weighted CFGs and weighted PDAs is described in Nederhof and Satta (2004) .",
"cite_spans": [
{
"start": 858,
"end": 895,
"text": "Abney, McAllester, and Pereira (1999)",
"ref_id": "BIBREF0"
},
{
"start": 1042,
"end": 1067,
"text": "Chelba and Jelinek, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 1633,
"end": 1658,
"text": "Nederhof and Satta (2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Practical Implications",
"sec_num": "5."
},
{
"text": "MacKay (1996) proved that linear Boltzmann chains (a class of weighted models that is essentially the same as Moore MRFs) express the same set of distributions as Moore HMMs, under the condition that the Boltzmann chain has a single specific end state. MacKay avoided the divergence problem by defining the Boltzmann chain always to condition on the length of the sequence; he tacitly requires all of his models to be in G Z n <\u221e . We have suggested a more applicable notion of model equivalence (equivalence of the conditional distribution) and our Theorem 1 generalizes to context-free models.",
"cite_spans": [
{
"start": 7,
"end": 13,
"text": "(1996)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "We have shown that weighted CFGs that define finite scores for all sentences in their languages have no greater expressivity than PCFGs, when used to define distributions over trees given sentences. This implies that the standard Mealy MRF formalism is no more powerful than Mealy HMMs, for instance. We have also related \"maximum entropy Markov models\" to Mealy Markov random fields, showing that the former is a strictly less expressive weighted formalism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Assigning a weight of zero to a rule equates to excluding it from R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Here, full rigor would require quantification of n, writing \"G \u2203nZ n (\u0398)=\u221e .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We are grateful to an anonymous reviewer for pointing this out, and an even stronger point: for a given G, G and G Z n <\u221e have a nonempty set-difference if and only if G has infinite ambiguity (some x \u2208 L(G) has infinitely many parse trees).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We admit that these names are somewhat misleading, because as we will show, chain-structured MRFs also have the Markov property and define the same joint and conditional distributions as HMMs. 5 In formal language theory both Mealy and Moore machines are finite-state transducers(Mealy 1955;Moore 1956); we ignore the input symbols here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Of course, the right-linear grammar is a CFG, so we could also use the notation G PCFG , P PCFG , and C PCFG .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "What if we allow additional features? It can be shown that, as long as the vocabulary \u03a3 is finite and known, we can convert any such MRF with potential functions on state transitions and emissions into an HMM functioning equivalently as a classifier. If \u03a3 is not fully known, then we cannot sum over all emissions from each state, and we cannot use Chi's method (Section 2.1) to convert to a PCFG (HMM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The HMM shown in Example 3 can be factored into a Moore HMM without any change to the distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by a Fannie and John Hertz Foundation fellowship to N. Smith at Johns Hopkins University. The views expressed are not necessarily endorsed by the sponsors. We are grateful to three anonymous reviewers for feedback that improved the article, to Michael Collins for encouraging exploration of this matter and helpful comments on a draft, and to Jason Eisner and Dan Klein for insightful conversations. Any errors are the sole responsibility of the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Applying probability measures to abstract languages",
"authors": [
{
"first": "Steven",
"middle": [
"P"
],
"last": "Abney",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "McAllester",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1973,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "22",
"issue": "",
"pages": "442--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abney, Steven P., David A. McAllester, and Fernando Pereira. 1999. Relating probabilistic grammars and automata. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 542-549, College Park, MD. Booth, Taylor L. and Richard A. Thompson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, 22(5):442-450.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exploiting syntactic structure for language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "325--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelba, Ciprian and Frederick Jelinek. 1998. Exploiting syntactic structure for language modeling. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, pages 325-331, Montreal, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical properties of probabilistic context-free grammars",
"authors": [
{
"first": "Zhiyi",
"middle": [],
"last": "Chi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "1",
"pages": "131--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi, Zhiyi. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131-160.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Concrete Mathematics",
"authors": [
{
"first": "Ronald",
"middle": [
"L"
],
"last": "Graham",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"E"
],
"last": "Knuth",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Patashnik",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham, Ronald L., Donald E. Knuth, and Oren Patashnik. 1994. Concrete Mathematics. Addison-Wesley, Reading, MA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Joint and conditional estimation of tagging and parsing models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "314--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnson, Mark. 2001. Joint and conditional estimation of tagging and parsing models. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 314-321, Toulouse, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Estimators for stochastic \"unification-based\" grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Canon",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johnson, Mark, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification-based\" grammars.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Proceedings of the 37th Annual Conference of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "535--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 37th Annual Conference of the Association for Computational Linguistics, pages 535-541, College Park, MD.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "P",
"middle": [
"A"
],
"last": "Philadelphia",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, Dan and Christopher D. Manning. 2002. Conditional structure versus conditional estimation in NLP models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 9-16, Philadelphia, PA. Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282-289, Williamstown, MA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Equivalence of linear Boltzmann chains and hidden Markov models",
"authors": [
{
"first": "David",
"middle": [
"J C"
],
"last": "Mackay",
"suffix": ""
}
],
"year": 1996,
"venue": "Neural Computation",
"volume": "8",
"issue": "1",
"pages": "178--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MacKay, David J. C. 1996. Equivalence of linear Boltzmann chains and hidden Markov models. Neural Computation, 8(1):178-181.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum entropy Markov models for information extraction and segmentation",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Dayne",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 17th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum, Andrew, Dayne Freitag, and Fernando Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the 17th International Conference on Machine Learning, pages 591-598, Palo Alto, CA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A method for synthesizing sequential circuits",
"authors": [
{
"first": "G",
"middle": [
"H"
],
"last": "Mealy",
"suffix": ""
}
],
"year": 1955,
"venue": "Bell System Technology Journal",
"volume": "34",
"issue": "",
"pages": "1045--1079",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mealy, G. H. 1955. A method for synthesizing sequential circuits. Bell System Technology Journal, 34:1045-1079.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Gedankenexperiments on sequential machines",
"authors": [
{
"first": "Edward",
"middle": [
"F"
],
"last": "Moore",
"suffix": ""
}
],
"year": 1956,
"venue": "Automata Studies, number 34 in Annals of Mathematics Studies",
"volume": "",
"issue": "",
"pages": "129--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moore, Edward F. 1956. Gedanken- experiments on sequential machines. In Automata Studies, number 34 in Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, pages 129-153.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Probabilistic parsing strategies",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "543--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nederhof, Mark-Jan and Giorgio Satta. 2004. Probabilistic parsing strategies. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 543-550, Barcelona, Spain.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A maximum entropy model for parsing",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "R. Todd",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "803--806",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratnaparkhi, Adwait, Salim Roukos, and R. Todd Ward. 1994. A maximum entropy model for parsing. In Proceedings of the International Conference on Spoken Language Processing, pages 803-806, Yokohama, Japan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smith, Noah A. and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities",
"authors": [],
"year": 1995,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "21",
"issue": "",
"pages": "165--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 354-362, Ann Arbor, MI. Stolcke, Andreas. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165-201.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Max-margin parsing",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taskar, Ben, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. 2004. Max-margin parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1-8, Barcelona, Spain.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Computational Recreations in Mathematica",
"authors": [
{
"first": "Ilan",
"middle": [],
"last": "Vardi",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vardi, Ilan. 1991. Computational Recreations in Mathematica. Addison-Wesley, Redwood City, CA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Probabilistic languages: A review and some open questions",
"authors": [
{
"first": "C",
"middle": [
"S"
],
"last": "Wetherell",
"suffix": ""
}
],
"year": 1980,
"venue": "Computing Surveys",
"volume": "12",
"issue": "",
"pages": "361--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wetherell, C. S. 1980. Probabilistic languages: A review and some open questions. Computing Surveys, 12:361-379.",
"links": null
}
},
"ref_entries": {}
}
}