ACL-OCL / Base_JSON /prefixT /json /tacl /2020.tacl-1.27.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:56:28.641796Z"
},
"title": "Consistent Unsupervised Estimators for Anchored PCFGs",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": "",
"affiliation": {},
"email": "alexsclark@gmail.com"
},
{
"first": "Nathana\u00ebl",
"middle": [],
"last": "Fijalkow",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"addrLine": "LaBRI, Bordeaux,",
"settlement": "Bordeaux"
}
},
"email": "nathanael.fijalkow@labri.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Learning probabilistic context-free grammars (PCFGs) from strings is a classic problem in computational linguistics since Horning (1969). Here we present an algorithm based on distributional learning that is a consistent estimator for a large class of PCFGs that satisfy certain natural conditions including being anchored (Stratos et al., 2016). We proceed via a reparameterization of (top-down) PCFGs that we call a bottom-up weighted context-free grammar. We show that if the grammar is anchored and satisfies additional restrictions on its ambiguity, then the parameters can be directly related to distributional properties of the anchoring strings; we show the asymptotic correctness of a naive estimator and present some simulations using synthetic data that show that algorithms based on this approach have good finite sample behavior.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Learning probabilistic context-free grammars (PCFGs) from strings is a classic problem in computational linguistics since Horning (1969). Here we present an algorithm based on distributional learning that is a consistent estimator for a large class of PCFGs that satisfy certain natural conditions including being anchored (Stratos et al., 2016). We proceed via a reparameterization of (top-down) PCFGs that we call a bottom-up weighted context-free grammar. We show that if the grammar is anchored and satisfies additional restrictions on its ambiguity, then the parameters can be directly related to distributional properties of the anchoring strings; we show the asymptotic correctness of a naive estimator and present some simulations using synthetic data that show that algorithms based on this approach have good finite sample behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper presents an approach for strongly learning a linguistically interesting subclass of probabilistic context-free grammars (PCFGs) from strings in the realizable case. Unpacking this, we assume that we have some PCFG that we are interested in learning and that we have access only to a sample of strings generated by the PCFG (i.e., sampled from the distribution defined by the context-free grammar). Crucially, we do not observe the derivation trees-the hierarchical latent structure. Strong learning means that we want the learned grammar to define the same distribution over labeled trees as the original grammar and not just the same distribution over strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Clearly, there can be many structurally different PCFGs that define the same distribution over strings. Consider for example the distribution that generates a single string of length 3 with probability one and the various PCFGs that give rise to that same distribution; for these obvious reasons, which we discuss in more detail later, we cannot have an algorithm that does this for all PCFGs. Accordingly, we define some sufficient conditions on PCFGs for this algorithm to perform correctly. More precisely, we define some simple structural conditions on the underlying CFGs (in Section 3), and we will show that the resulting class of PCFGs is identifiable from strings, in the sense that any two PCFGs that define the same distribution over strings will be isomorphic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We then provide a computationally trivial learning algorithm in Section 4, together with a proof that it will strongly learn every grammar in this class. The algorithm is not intended to be a realistic algorithm, but merely to illustrate the fundamental correctness of this general approach. We then show that general PCFGs in Chomsky normal form (CNF) that approximate the observable properties of natural language syntax are efficiently learnable using some simulations with synthetic data in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our primary scientific motivation is to understand the process of first-language acquisition, in particular the early phases of the acquisition of syntactic structure. Importantly, the grammar is not just a decision procedure that classifies strings as being grammatical or ungrammatical, but additionally assigns a tree structure to the grammatical sentences, a structure the primary role of which is to support semantic interpretation. The standard view is that children learn the syntactic structure of their languages not by purely syntactic means, but rather by using information about the range of available interpretations, derived from the situational context of the sentences they hear and inferences about the intentions and goals of the speaker (e.g., Abend et al., 2017) . Indeed there is ample direct evidence from the developmental psycholinguistics literature that this does in fact happen at certain stages of language acquisition: For example, Gropen et al. (1991) showed that the acquisition of argument structure of verbs exploits semantic information about the verb and the arguments. However, the children in these experiments-the youngest cohort being nearly 4 years old-have already acquired a great deal of knowledge about English syntax.",
"cite_spans": [
{
"start": 763,
"end": 782,
"text": "Abend et al., 2017)",
"ref_id": null
},
{
"start": 961,
"end": 981,
"text": "Gropen et al. (1991)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here, we are exploring an alternative or perhaps complementary hypothesis: namely, that the acquisition of the syntactic categories and rules of the language can to a certain extent be learned using only information derived from the surface strings without any appeal to external information about the hierarchical structure of the language that is being learned. In other words, the initial phases of language acquisition are based on purely syntactic information rather than the semantic bootstrapping discussed above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of this paper are as follows. First, we provide a reparameterization of PCFGs within the space of weighted context-free grammars (WCFGs) that we call Bottom-up WCFGs. Next, we define three structural conditions on CFGs and show that they imply the identifiability of the class of all PCFGs based on those grammars. We then present a naive computationally trivial estimator and prove its asymptotic consistency for that class of PCFGs. We present some experiments on synthetic grammars that show that a variant of this algorithm has good finite sample behavior. Finally, we examine the extent to which these conditions are plausible, using a corpus of child-directed speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We assume we have a finite set of atomic symbols \u03a3. The set of finite strings over this set is written \u03a3 * , nonempty finite strings are denoted by \u03a3 + , and the empty string is \u03bb. We will typically write a, b, c, . . . for elements of \u03a3 and u, v, w, . . . for elements of \u03a3 * . A (formal) language L is a subset of \u03a3 * . A context is an ordered pair of strings, that is, an element of \u03a3 * \u00d7 \u03a3 * that we write as l, r. If U, V are languages, then their concatenation is U V defined in the normal way, and we will also write uV where u is a string instead of {u}V and so on. Given a fixed language L, we define for a set of strings U a set of contexts U \u22b2 as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
{
"text": "U \u22b2 = {l, r | lU r \u2286 L}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
{
"text": "If U = {u} we will write u \u22b2 for the distribution of u-the set of contexts in which it can occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
{
"text": "A stochastic language is a function P from \u03a3 * \u2192 [0, 1], such that \u2211 w\u2208\u03a3 * P(w) = 1. Note that the support of this distribution is a formal language as defined above. We assume for the rest of the paper that the expected length of strings drawn from this distribution is finite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
{
"text": "We can define for some u \u2208 \u03a3 + , the expected number of times that u will occur as a substring in a string distributed according to P: E(u) = \u2211 l,r\u2208\u03a3 * \u00d7\u03a3 * P(lur)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
{
"text": "We can also define, for a string u, its context distribution, which is a probability distribution over its contexts written D(u), whose support will be u \u22b2 , given for l, r \u2208 \u03a3 * \u00d7 \u03a3 * by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
{
"text": "D(u)[l, r] = P(lur) / E(u).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2"
},
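As a concrete illustration of these definitions, both E(u) and the context distribution D(u) can be estimated from a finite sample of strings. The sketch below is ours, not the paper's (the function names and the tuple encoding of a context as (l, r) are our own conventions); it simply counts every occurrence of u together with its surrounding context:

```python
from collections import Counter

def context_counts(corpus, u):
    """Count each context (l, r) in which the substring u occurs across a corpus."""
    counts = Counter()
    for w in corpus:
        for i in range(len(w) - len(u) + 1):
            if w[i:i + len(u)] == u:
                counts[(w[:i], w[i + len(u):])] += 1
    return counts

def expected_count(corpus, u):
    """Empirical estimate of E(u): mean number of occurrences of u per string."""
    return sum(context_counts(corpus, u).values()) / len(corpus)

def context_distribution(corpus, u):
    """Empirical context distribution D(u)[l, r] = P(lur) / E(u)."""
    counts = context_counts(corpus, u)
    total = sum(counts.values())
    return {lr: c / total for lr, c in counts.items()}
```

Here each string plays the role of an element of \u03a3 * , with characters as terminal symbols; a corpus of token sequences would use tuples instead.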
{
"text": "We consider context-free grammars (CFGs) in Chomsky normal form \u03a3, V, S, P where \u03a3 is a nonempty finite set of terminal symbols; V is a nonempty finite set, disjoint from \u03a3 of nonterminal symbols, S is a distinguished element of V , the start symbol and P is a finite nonempty set of productions each of which is either of the form A \u2192 a where A \u2208 V and a \u2208 \u03a3 or A \u2192 BC where A \u2208 V and B, C \u2208 V \\ {S}. 2 We write A, B, C, . . . for elements of V and \u03b1 for strings over V \u222a \u03a3. A derivation tree \u03c4 is a singly rooted ordered tree where every node is labeled with an element of V \u222a \u03a3 and each local tree is in P . The yield of a derivation is the string of symbols of leaves of the tree taken left to right; we write this as y(\u03c4 ). The set of all derivations licensed by G and rooted by a nonterminal A, and with a yield in a set \u0393 is written as \u2126(G, A, \u0393); here we follow the notation of Smith and Johnson (2007) among others. We will omit G when it is clear.",
"cite_spans": [
{
"start": 402,
"end": 403,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Free Grammars",
"sec_num": null
},
{
"text": "We want to be able to combine trees using tree substitution; thus, if we have a tree \u03c4 1 whose yield is lBr, where l and r are strings over \u03a3, and a tree \u03c4 2 whose root is B and whose yield is \u03b1, we can combine them to get a tree \u03c4 1 \u2297 \u03c4 2 whose yield is l\u03b1r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Free Grammars",
"sec_num": null
},
{
"text": "We define the string language defined by a nonterminal A to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Free Grammars",
"sec_num": null
},
{
"text": "L(G, A) = {y(\u03c4 ) : \u03c4 \u2208 \u2126(G, A, \u03a3 + )}. The string language defined by a CFG G is L(G) = L(G, S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Free Grammars",
"sec_num": null
},
{
"text": "For a tree \u03c4 and a production A \u2192 \u03b1 we write f (A \u2192 \u03b1; \u03c4 ) for the number of times the production occurs in \u03c4 . We write |\u03c4 | for the number of nonterminal symbols in a tree, and |w| for the length of a string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Free Grammars",
"sec_num": null
},
{
"text": "We will now consider the probabilistic case where we have a (discrete) probability distribution over trees, that is, over \u2126(G, S, \u03a3 + ), which will then define a stochastic language, whose support will be a context-free language. We will only consider those distributions which satisfy some simple conditional independence assumptions and can be represented by weighted CFGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "A weighted CFG (WCFG) is a CFG together with a parameter function \u03b8 : P \u2192 R that maps productions to nonnegative real values; we will write this as G; \u03b8. The weight or score of a tree \u03c4 is the product of the weights of each production.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "Formally s : \u2126(G) \u2192 R is defined as s(\u03c4 ; \u03b8) = \u220f A\u2192\u03b1\u2208P \u03b8(A \u2192 \u03b1)^{f (A\u2192\u03b1;\u03c4 )}. Note that s(\u03c4 1 \u2297 \u03c4 2 ) = s(\u03c4 1 )s(\u03c4 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "In general we will define the score of a set of trees \u2126 to be the sum of the scores of the trees in that set: s(\u2126) = \u2211 \u03c4 \u2208\u2126 s(\u03c4 ). The weight of a string w is the sum of the weights of each derivation tree which yields w; s(w) = s(\u2126(G, S, w)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
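The scoring function can be made concrete with a small sketch (our own encoding, not the paper's): a CNF derivation tree is a nested tuple, and its score is the product of the weights of the productions it uses.

```python
import math

# A CNF derivation tree is (A, left_subtree, right_subtree) for a binary rule
# A -> BC, or (A, "a") for a lexical rule A -> a. theta maps (A, rhs) to a
# weight, where rhs is either a terminal string or a pair of nonterminal labels.
def tree_score(tree, theta):
    """s(tau; theta): product over productions of theta(A -> alpha)^f(A -> alpha; tau)."""
    label, *children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return theta[(label, children[0])]           # lexical rule A -> a
    rhs = tuple(child[0] for child in children)      # binary rule A -> BC
    return theta[(label, rhs)] * math.prod(tree_score(c, theta) for c in children)
```

The multiplicativity s(\u03c4 1 \u2297 \u03c4 2 ) = s(\u03c4 1 )s(\u03c4 2 ) falls out directly: substituting one tree into another concatenates their multisets of productions, and the score is a product over that multiset.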
{
"text": "Definition 2.1. The inside value of a nonterminal A, written I(A) is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "I(A) = s(\u2126(G, A, \u03a3 + ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "Note that this quantity is sometimes called the partition function, written Z(A). The outside value, O(A), is defined likewise as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "O(A) = s(\u2126(G, S, \u03a3 * A\u03a3 * ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "Note that O(S) = 1 by definition, since \u2126(G, S, \u03a3 * S\u03a3 * ) is a single element set consisting of the trivial tree with one node S, which has score 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "A WCFG is globally normalized if I(S) = 1. In this case it defines a probability distribution over trees; we can identify the probability of a tree with its score, P(\u03c4 ) = s(\u03c4 ), and via that a stochastic language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WCFGs",
"sec_num": "2.1"
},
{
"text": "We define expectations of nonterminals, terminals, and productions with respect to the distribution over trees defined by a globally normalized WCFG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "Given a globally normalized WCFG, the quantity E(A \u2192 \u03b1) is the expected number of times the production A \u2192 \u03b1 occurs in a tree generated by the distribution induced by the grammar:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "E(A \u2192 \u03b1) = \u2211 \u03c4 \u2208\u2126(G,S,\u03a3 + ) s(\u03c4 )f (A \u2192 \u03b1; \u03c4 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "Using this we define the expectation of a nonterminal:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "E(A) = \u2211 \u03b1:A\u2192\u03b1\u2208P E(A \u2192 \u03b1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "Note that E(S) = 1 (because S occurs exactly once, at the root of every tree).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "For nonterminals A, B, C and terminals a, the following identities relate the expectations and the inside and outside values, which can be established using the methods of, for example, Chi (1999) .",
"cite_spans": [
{
"start": 186,
"end": 196,
"text": "Chi (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "E(A) = I(A)O(A) (1); E(A \u2192 a) = O(A)\u03b8(A \u2192 a); E(A \u2192 BC) = O(A)\u03b8(A \u2192 BC)I(B)I(C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "Note that for any nonterminal A that is not S, and any \u03b2 > 0, we can scale all parameters for productions with A on the left-hand side by \u03b2, and every production with A on the right-hand side by \u03b2 \u22121 (or \u03b2 \u22122 if A occurs twice on the right-hand side), and the score of every tree will remain the same. There are two natural ways of resolving this arbitrariness: one is to stipulate that for all nonterminals I(A) = 1, which gives us the familiar PCFG. The parameters of a tight PCFG satisfy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8(A \u2192 \u03b1) = E(A \u2192 \u03b1) / E(A).",
"eq_num": "(2)"
}
],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "The learning approach we take here is based on modeling the context distribution, and it is therefore more mathematically convenient to use the second normalization method where we stipulate that O(A) = 1 for all nonterminals. We now define this alternative parameterization, which we call a bottom-up WCFG, in contrast to the topdown generative process associated with a PCFG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "Definition 2.2 (bottom-up WCFG). We say that a WCFG is in bottom-up form if I(S) = 1, and for all nonterminals A, O(A) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "If a WCFG is in bottom-up form then the parameters satisfy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "\u03b8(A \u2192 BC) = E(A \u2192 BC) / (E(B)E(C)) (3); \u03b8(A \u2192 a) = E(A \u2192 a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
{
"text": "Note that in this form, we condition the parameters on the right-hand side of the production, not on the left-hand side as is done with a PCFG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
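The two normalizations can be made concrete. Given a table of production expectations, the following sketch (our own encoding: a dict from (A, rhs) to E(A \u2192 \u03b1), with rhs a terminal string or a pair of nonterminal labels) applies Equation 2 to obtain PCFG parameters and Equation 3 to obtain bottom-up parameters:

```python
def nonterminal_expectations(expect):
    """E(A) = sum over productions A -> alpha of E(A -> alpha)."""
    E_nt = {}
    for (A, _), e in expect.items():
        E_nt[A] = E_nt.get(A, 0.0) + e
    return E_nt

def to_pcfg(expect):
    """Equation 2: theta(A -> alpha) = E(A -> alpha) / E(A)."""
    E_nt = nonterminal_expectations(expect)
    return {(A, rhs): e / E_nt[A] for (A, rhs), e in expect.items()}

def to_bottom_up(expect):
    """Equation 3: theta(A -> BC) = E(A -> BC) / (E(B)E(C)); theta(A -> a) = E(A -> a)."""
    E_nt = nonterminal_expectations(expect)
    out = {}
    for (A, rhs), e in expect.items():
        if isinstance(rhs, tuple):       # binary rule A -> BC
            B, C = rhs
            out[(A, rhs)] = e / (E_nt[B] * E_nt[C])
        else:                            # lexical rule A -> a
            out[(A, rhs)] = e
    return out
```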
{
"text": "There is a unique bijection between the class of tight PCFGs and bottom-up WCFGs; we can easily convert from one form to the other. We can efficiently compute the inside and outside values of a convergent WCFG using standard techniques (Hutchins, 1972; Nederhof and Satta, 2008; Etessami et al., 2012) ; these involve solving a system of quadratic equations (since the grammar is in Chomsky normal form) in the case of the inside values, which can be done using the Newton method or a fixed point iteration, and a linear system in the case of the outside values. The expectations of each production can then be computed using Equation 1 and then converted into a PCFG or bottom up WCFG as desired using Equations 2 and 3, respectively.",
"cite_spans": [
{
"start": 236,
"end": 252,
"text": "(Hutchins, 1972;",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 278,
"text": "Nederhof and Satta, 2008;",
"ref_id": "BIBREF19"
},
{
"start": 279,
"end": 301,
"text": "Etessami et al., 2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expectations",
"sec_num": "2.2"
},
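A minimal sketch of the inside-value computation for a CNF WCFG (our own encoding; the Newton method mentioned above converges faster, but the naive fixed-point iteration below suffices for a convergent grammar):

```python
def inside_values(rules, nonterminals, iters=200):
    """Fixed-point iteration for the quadratic system defining inside values:
    I(A) = sum over A -> a of theta(A -> a)
         + sum over A -> BC of theta(A -> BC) * I(B) * I(C).
    rules maps (A, rhs) -> weight, with rhs a terminal or a pair (B, C)."""
    I = {A: 0.0 for A in nonterminals}
    for _ in range(iters):
        new = {A: 0.0 for A in nonterminals}
        for (A, rhs), w in rules.items():
            if isinstance(rhs, tuple):       # binary rule A -> BC
                new[A] += w * I[rhs[0]] * I[rhs[1]]
            else:                            # lexical rule A -> a
                new[A] += w
        I = new
    return I
```

For a tight PCFG every inside value converges to 1, which is one way to sanity-check a grammar numerically.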
{
"text": "We assume that we have a sequence of strings generated independently and identically distributed (i.i.d.) from some distribution generated by an unknown PCFG or WCFG, which we call the target grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3"
},
{
"text": "We are interested in the problem of producing a PCFG from this input data that is close to the target PCFG; namely, the underlying CFG is isomorphic to the underlying CFG of the target grammar and additionally the parameters are within \u01eb of the corresponding parameters of the target grammar: we call this being \u01eb-close. Two CFGs are isomorphic if they are identical apart from the labels of the nonterminals; the isomorphism is just a bijection between the nonterminals and productions in the natural way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3"
},
{
"text": "Definition 3.1. Two WCFGs, G; \u03b8 and G \u2032 ; \u03b8 \u2032 , are \u01eb-close if there is a CFG-isomorphism \u03c6 from G to G \u2032 such that for all A \u2192 \u03b1 in the grammars,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3"
},
{
"text": "|\u03b8(A \u2192 \u03b1) \u2212 \u03b8 \u2032 (\u03c6(A \u2192 \u03b1))| < \u01eb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3"
},
{
"text": "More precisely, we say that a learning algorithm A is a consistent estimator for a class of globally normalized WCFGs, G, if for every WCFG, G * , \u03b8 * in the class, for every \u01eb, \u03b4 > 0, there is an N such that if the algorithm receives a sample of m strings, sampled i.i.d. where m \u2265 N then it outputs a WCFG\u011c,\u03b8 such that with probability at least 1 \u2212 \u03b4 we have that\u011c,\u03b8 is \u01eb-close to G * , \u03b8 * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3"
},
{
"text": "We now define three structural conditions on PCFGs that will be sufficient to guarantee identifiability of the class from strings. Condition 3.1. A grammar G is anchored if for every nonterminal A, there exists a terminal a such that A \u2192 a \u2208 P and, if B \u2192 a \u2208 P then B = A. In other words a occurs on the right-hand side of exactly one production.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Conditions on Grammars",
"sec_num": "3.1"
},
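Checking the anchoring condition is straightforward given the production set. A sketch (the helper names are ours), assuming rules are keyed by (A, rhs) with rhs a terminal string or a pair of nonterminal labels:

```python
def anchors(rules):
    """Map each nonterminal to its characterizing terminals: terminals a
    that occur on the right-hand side of exactly one lexical rule A -> a."""
    lexical = {}  # terminal -> set of nonterminals A with A -> a in the grammar
    for (A, rhs) in rules:
        if not isinstance(rhs, tuple):
            lexical.setdefault(rhs, set()).add(A)
    result = {}
    for a, nts in lexical.items():
        if len(nts) == 1:              # a is unambiguous: it characterizes A
            (A,) = nts
            result.setdefault(A, []).append(a)
    return result

def is_anchored(rules, nonterminals):
    """Condition 3.1: every nonterminal has at least one characterizing terminal."""
    return set(anchors(rules)) == set(nonterminals)
```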
{
"text": "We will call such a terminal a characterizing terminal of A, and if a characterizes A we will sometimes write",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Conditions on Grammars",
"sec_num": "3.1"
},
{
"text": "[[a]] for A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Conditions on Grammars",
"sec_num": "3.1"
},
{
"text": "This condition is very close to a number of conditions that have been proposed in the literature both for topic modeling and for grammatical inference: We use here the terminology of Stratos et al. (2016) , but similar ideas occur in, for example, Adriaans's (1999) approach to learning CFGs and Denis et al.'s (2004) approach to learning regular languages. This is also very closely related to what is called the 1-Finite Kernel Property in distributional learning of CFGs (Clark and Yoshinaka, 2016) .",
"cite_spans": [
{
"start": 183,
"end": 204,
"text": "Stratos et al. (2016)",
"ref_id": "BIBREF28"
},
{
"start": 248,
"end": 265,
"text": "Adriaans's (1999)",
"ref_id": "BIBREF1"
},
{
"start": 296,
"end": 317,
"text": "Denis et al.'s (2004)",
"ref_id": null
},
{
"start": 474,
"end": 501,
"text": "(Clark and Yoshinaka, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Conditions on Grammars",
"sec_num": "3.1"
},
{
"text": "The key idea behind the learning algorithm is this: If every nonterminal has a characterizing terminal then we can infer the probabilities of the productions of the grammar from distributional properties of the strings of corresponding terminals. Thus if A, B, and C are nonterminals characterized by a, b, and c, respectively, then we can infer something about the parameter of the production A \u2192 BC by looking at the distributional properties of a and bc. And if A is a nonterminal characterized by a and b is any terminal, then we can infer something about the parameter of the production A \u2192 b by looking at the distributional properties of a and b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Conditions on Grammars",
"sec_num": "3.1"
},
{
"text": "We start by defining some quantities that depend only on a distribution over strings. Recall that the R\u00e9nyi \u03b1-divergence (R\u00e9nyi, 1961) between two discrete distributions P and Q is defined for \u03b1 = \u221e",
"cite_spans": [
{
"start": 121,
"end": 134,
"text": "(R\u00e9nyi, 1961)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R \u221e (P \u2016 Q) = log sup x P (x) / Q(x)",
"eq_num": "(4)"
}
],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Given two strings u, v we will be concerned",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "with \u03c1(u \u2192 v), defined as \u03c1(u \u2192 v) = R \u221e (D(u) \u2016 D(v))",
"eq_num": "(5)"
}
],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "This is an asymmetric nonnegative measure of ''distance'' between the context distributions of u and v, which takes the value 0 only when they are identical. Note that, because u \u22b2 is the support of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "D(u), e^{\u2212\u03c1(u\u2192v)} = (E(u)/E(v)) inf l,r\u2208u \u22b2 P(lvr)/P(lur)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
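The \u221e-divergence between two empirical context distributions can be computed directly. A small sketch (the function name is ours; distributions are dicts mapping a context to its probability):

```python
import math

def renyi_inf(P, Q):
    """Renyi infinity-divergence R_inf(P || Q) = log sup_x P(x)/Q(x),
    with the supremum over the support of P. Returns infinity if Q
    assigns zero probability anywhere in P's support."""
    ratios = []
    for x, p in P.items():
        if p == 0.0:
            continue
        q = Q.get(x, 0.0)
        if q == 0.0:
            return math.inf
        ratios.append(p / q)
    return math.log(max(ratios))
```

With empirical context distributions in hand, rho(u -> v) is just renyi_inf(D_u, D_v); it is 0 exactly when the two distributions coincide, matching the text above.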
{
"text": "We can now state a foundational result, which relates the parameters of a production to these divergences. We will start by proving an inequality, that we will later strengthen to an equality under additional conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Theorem 3.1. Suppose G; \u03b8 is a bottom-up WCFG, and G is anchored. Let D be the distribution it defines, and P the set of productions. Suppose that a, b, c are characterizing terminals for nonterminals A, B, C respectively. Then for any terminal d: if A \u2192 d \u2208 P then \u03b8(A \u2192 d) \u2264 E(d) e^{\u2212\u03c1(a\u2192d)}, and if A \u2192 BC \u2208 P then \u03b8(A \u2192 BC) \u2264 (E(bc)/(E(b)E(c))) e^{\u2212\u03c1(a\u2192bc)}. Proof. Suppose A is a nonterminal in G that is characterized by a. Then, for every context l, r, since the only way that we can derive an a is via A, P(lar) = s(\u2126(S, lAr))\u03b8(A \u2192 a). Summing both sides with respect to l, r we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "if A \u2192 d \u2208 P then \u03b8(A \u2192 d) \u2264 E(d) e^{\u2212\u03c1(a\u2192d)}, and if A \u2192 BC \u2208 P then \u03b8(A \u2192 BC) \u2264 (E(bc)/(E(b)E(c))) e^{\u2212\u03c1(a\u2192bc)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "E(a) = O(A)\u03b8(A \u2192 a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Since O(A) = 1 in a bottom-up WCFG we have that \u03b8(A \u2192 a) = E(a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "and therefore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(\u2126(S, lAr)) = P(lar) / E(a)",
"eq_num": "(7)"
}
],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Now consider lexical rules. Consider some production A \u2192 d in the grammar, where a characterizes A. Consider some l, r \u2208 a \u22b2 . Since a is an anchor of A, we know that s(\u2126(S, lAr)) > 0, and therefore P(ldr) > 0. Clearly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(ldr) \u2265 s(\u2126(S, lAr))\u03b8(A \u2192 d)",
"eq_num": "(8)"
}
],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "since the probability on the left-hand side is a sum over the scores of many possible derivations, and the right-hand side is a sum over a subset of those derivations. Therefore:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "\u03b8(A \u2192 d) \u2264 P(ldr) / s(\u2126(S, lAr))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Now using Equation 7, we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "\u03b8(A \u2192 d) \u2264 E(a) P(ldr) / P(lar)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Because this is true for all l, r \u2208 a \u22b2 we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "\u03b8(A \u2192 d) / E(d) \u2264 (E(a)/E(d)) inf l,r\u2208a \u22b2 P(ldr)/P(lar) = e^{\u2212\u03c1(a\u2192d)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "The same argument goes through for the binary rules. Suppose we have A, B, C nonterminals characterized by a, b, c, respectively, and a production A \u2192 BC with parameter \u03b8(A \u2192 BC). Let l, r be some context in a \u22b2 , then P(lar) > 0 and P(lbcr) > 0. Clearly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(lbcr) \u2265 s(\u2126(S, lAr))\u03b8(A \u2192 BC)\u03b8(B \u2192 b)\u03b8(C \u2192 c)",
"eq_num": "(9)"
}
],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Therefore \u03b8(A \u2192 BC) is smaller than or equal to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "P(lbcr) / (s(\u2126(S, lAr)) \u03b8(B \u2192 b) \u03b8(C \u2192 c)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Using Equation 6 twice, and Equation 7 we get",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "\u03b8(A \u2192 BC) \u2264 (E(bc)/(E(b)E(c))) \u00b7 (E(a)/E(bc)) \u00b7 P(lbcr)/P(lar)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "Again, because this is true for all l, r \u2208 a \u22b2 we have \u03b8(A \u2192 BC) \u2264 (E(bc) / (E(b)E(c))) e^{\u2212\u03c1(a \u2192 bc)}. This shows us that we have an upper bound on the parameters from a distributional property. But looking at Equations 8 and 9, we can consider the circumstances under which this inequality will be tight, in which case we can recover the parameters directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "In particular, if the grammar is unambiguous (i.e., if every string has at most one derivation tree) then if the left-hand side of the inequality is nonzero we can immediately see that the inequality will become an equality. As it happens, there will also be equality under some much weaker conditions that we now define.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Divergences",
"sec_num": "3.2"
},
{
"text": "We now define two closely related conditions that are both related to the degree of ambiguity of the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "Condition 3.2. Suppose a CFG G contains a production A \u2192 \u03b1. We say that G has an unambiguous context for that production if there is a string w and strings l, u, r such that w = lur, \u2126(G, S, w) is nonempty and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "\u2126(G, S, w) = \u2126(G, S, lAr) \u2297 \u2126(G, A, u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "and all elements of \u2126(G, A, u) have an occurrence of A \u2192 \u03b1 at the root. A CFG is locally unambiguous if it has an unambiguous context for every production in its set of productions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "Informally this condition says that for every production there is some string which, although it can be ambiguous, always uses that production at the same point. Note that if G is locally unambiguous and is anchored, then for every binary production [[a]] \u2192 [[b]][[c]] there will be a context l, r such that lbcr satisfies the condition; and for every production [[a]] \u2192 b there will be a context l, r such that lbr satisfies the condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "If a grammar is unambiguous, then every context is an unambiguous context for every derivation that uses it, but this condition is much weaker than that; indeed, we don't need there to be any unambiguous strings, since \u2126(G, S, lAr) can have more than one element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "Lemma 3.1. If G; \u03b8 is a bottom-up WCFG and G is anchored and locally unambiguous, then if [[a]] \u2192 b \u2208 P , \u03b8([[a]] \u2192 b) = E(b) e^{\u2212\u03c1(a \u2192 b)}, and if [[a]] \u2192 [[b]][[c]] \u2208 P , \u03b8([[a]] \u2192 [[b]][[c]]) = (E(bc) / (E(b)E(c))) e^{\u2212\u03c1(a \u2192 bc)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "Proof. Since G is locally unambiguous, the production [[a]] \u2192 [[b]][[c]] has an unambiguous context l, r; writing A = [[a]], we have \u2126(S, lbcr) = \u2126(S, lAr) \u2297 \u2126(A, bc). Now we apply the same manipulations to get that for this l, r, \u03b8([[a]] \u2192 [[b]][[c]]) = (E(a) / (E(b)E(c))) P(lbcr) / P(lar), and therefore \u03b8([[a]] \u2192 [[b]][[c]]) = (E(bc) / (E(b)E(c))) e^{\u2212\u03c1(a \u2192 bc)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "The argument for lexical rules is analogous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "We can understand this better by taking the log.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log \u03b8([[a]] \u2192 [[b]][[c]]) = log (E(bc) / (E(b)E(c))) \u2212 \u03c1(a \u2192 bc)",
"eq_num": "(10)"
}
],
"section": "Ambiguity",
"sec_num": "3.3"
},
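The first term of Equation 10 is a pointwise mutual information computed from substring expectations. As a minimal illustration (the function name and the toy expectation values here are ours, not from the paper):

```python
import math

def pmi(E, b, c):
    """PMI term of Equation 10: log E(bc) / (E(b) E(c)),
    where E maps a substring to its expected count per string."""
    return math.log(E[b + c] / (E[b] * E[c]))

# Toy expectations in which bc co-occurs twice as often as chance:
E = {"b": 0.2, "c": 0.1, "bc": 0.04}
print(pmi(E, "b", "c"))  # log 2: a positive association between b and c
```

A positive value favors the binary production; the divergence term \u03c1(a \u2192 bc) is then subtracted from it.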
{
"text": "The natural parameter is then the sum of two terms: The first is just the pointwise mutual information (Church and Hanks, 1990 ) between b and c. 3 The second term penalizes cases where the right-hand side is distributionally dissimilar from the left-hand side. For the lexical productions, similarly we have two terms:",
"cite_spans": [
{
"start": 103,
"end": 126,
"text": "(Church and Hanks, 1990",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "log \u03b8([[a]] \u2192 b) = log E(b) \u2212 \u03c1(a \u2192 b) (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "3.3"
},
{
"text": "We need one more condition, however. There may be many different grammars that satisfy these two conditions and define the same distribution over strings, because we may have multiple nonterminals that could be merged together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Upward Monotonicity",
"sec_num": "3.4"
},
{
"text": "G = \u27e8\u03a3, V, S, P\u27e9 is strictly upward monotonic if for all Q \u2283 P , L(\u27e8\u03a3, V, S, Q\u27e9) \u2283 L(G). (Where Q is restricted to CNF productions in V \u00d7 (\u03a3 \u222a V^2).)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "Informally, if we add a new production to the grammar, then the language defined increases. Note that of course all grammars have the property that if Q \u2287 P , then L(\u27e8\u03a3, V, S, Q\u27e9) \u2287 L(G). Here we require this monotonicity to be strict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "We define the set of derivation contexts of a nonterminal A to be C(G, A) = {(l, r) : \u2126(G, S, lAr) \u2260 \u2205}. Lemma 3.2. Suppose G is anchored and upward monotonic: If A, B are nonterminals and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "C(G, A) = C(G, B) then A = B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "Proof. Let a be an anchor for A; since C(G, A) = C(G, B), we can clearly add the production B \u2192 a without increasing the language generated. By strict upward monotonicity, B \u2192 a must therefore already be in the grammar, and so A = B as a is an anchor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "Lemma 3.3. Suppose G is anchored and upward monotonic. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "[[a]] \u2192 b \u2208 P iff a \u22b2 \u2286 b \u22b2 and [[a]] \u2192 [[b]][[c]] \u2208 P iff a \u22b2 \u2286 (bc) \u22b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "Using the same condition we can show that productions not in the grammar will have parameters zero, because of an infinite divergence term. Lemma 3.4. Suppose G is anchored and upward monotonic; then if [[a]] \u2192 b \u2209 P we have \u03c1(a \u2192 b) = \u221e, and if [[a]] \u2192 [[b]][[c]] \u2209 P we have \u03c1(a \u2192 bc) = \u221e. Proof. If A \u2192 b is not in the grammar, then by Lemma 3.3, there is some l, r such that lar is in the language but lbr is not, and so \u03c1(a \u2192 b) = \u221e. Similarly for binary rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Condition 3.3. A grammar",
"sec_num": null
},
{
"text": "The preceding discussion shows that if we have a set of terminals that are anchors for the true nonterminals in the original grammar, then the productions and the (bottom-up) parameters of the associated productions will be fixed correctly, but it says nothing about parameters that might be associated to productions that use other nonterminals. However, it is easy to show that under these assumptions there can be no other nonterminals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Nonterminals",
"sec_num": "3.5"
},
{
"text": "Lemma 3.5. Suppose G 1 and G 2 are anchored and strictly monotonic, and are weakly equivalent. Then they are isomorphic, and there is a unique isomorphism between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Nonterminals",
"sec_num": "3.5"
},
{
"text": "Proof. Let A be a nonterminal in G 1 , and let a be an anchor for A. Let B \u2192 a be some production in G 2 , and let b be an anchor for B. Therefore a \u22b2 \u2287 b \u22b2 . By a similar argument there must be a nonterminal C in G 1 and a terminal c that anchors C such that b \u22b2 \u2287 c \u22b2 . But because a \u22b2 \u2287 c \u22b2 , we must have a production C \u2192 a in G 1 . Since a is an anchor, C = A, and therefore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Nonterminals",
"sec_num": "3.5"
},
{
"text": "a \u22b2 = b \u22b2 = c \u22b2 . Therefore C(G 1 , A) = C(G 2 , B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Nonterminals",
"sec_num": "3.5"
},
{
"text": "Let \u03c6 then be the CFG-morphism from G 1 \u2192 G 2 defined by \u03c6(A) = A \u2032 iff C(G 1 , A) = C(G 2 , A \u2032 ). This is well defined by Lemma 3.2, and is clearly a bijection. Given this bijection, by Lemma 3.3, they will have the same set of productions, and thus be isomorphic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Nonterminals",
"sec_num": "3.5"
},
{
"text": "We can now define the classes of grammars that we are interested in. Let G A be the set of all trim CFGs that are in Chomsky normal form, anchored (Condition 3.1), are locally unambiguous (Condition 3.2), and are strictly upward monotonic (Condition 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3.6"
},
{
"text": "Let P A be the set of all tight PCFGs with finite expectations, with CFGs in G A , and let W A be the set of all WCFGs in bottom-up form with CFGs in G A . Theorem 3.2. Suppose G 1 ; \u03b8 1 and G 2 ; \u03b8 2 are in W A and are stochastically equivalent; in other words, for all w \u2208 \u03a3 + , P(w; G 1 ) = P(w; G 2 ). Then G 1 is isomorphic to G 2 , and if \u03c6 is the unique such morphism, then for all A \u2192 \u03b1, \u03b8 1 (A \u2192 \u03b1) = \u03b8 2 (\u03c6(A \u2192 \u03b1)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3.6"
},
{
"text": "Proof. Because they are stochastically equivalent, the support of their distributions is equal, and thus G 1 and G 2 are weakly equivalent. Therefore by Lemma 3.5 there is a unique isomorphism between them, \u03c6. By Lemma 3.1 the parameters of corresponding productions must also be equal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3.6"
},
{
"text": "Because there is a bijection between W A and P A , P A is also identifiable from strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifiability",
"sec_num": "3.6"
},
{
"text": "We now analyze the properties of a particular estimator that we call the naive plugin estimator, which we will show can learn all grammars in W A and P A . This approach uses a trivial method of estimating the \u03c1 values, and from this we derive a consistent estimator for the class. It has poor sample complexity but is algorithmically trivial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "We will need to estimate the \u03c1 divergences from a sample of strings drawn i.i.d. from the distribution defined by the grammar. Given a sample of strings, the most naive approach is to estimate P(w) and E(a) by the empirical distribution, to estimate the ratio as the ratio of these estimates, and to take the supremum over the frequent contexts of a rather than over the infinite set a \u22b2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "We are interested in convergence in probability, which we will write as X N \u2192 X as N \u2192 \u221e; in other words, for any \u01eb, \u03b4 > 0, there is an n such that for all N > n, with probability greater than 1 \u2212 \u03b4 we have |X N \u2212 X| < \u01eb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Let w 1 , . . . , w N be the sample of N strings drawn i.i.d. from a target PCFG; let n(w) be the number of times that w occurs in the sample (as a whole string), and let m(u) be the number of times that u occurs as a substring; clearly, \u2211 l,r n(lur) = m(u). Define P\u0302(w) = n(w)/N to be the empirical probability of w and \u00ca(u) = m(u)/N to be the empirical expectation of u. Clearly, for any string w we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "P\u0302(w) \u2192 P(w) and \u00ca(w) \u2192 E(w) as N \u2192 \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "The naive plugin estimator is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Definition 4.1. For a, b, c \u2208 \u03a3 we define \u03c1\u0302 N (a \u2192 bc) = log [ (\u00ca(bc) / \u00ca(a)) max_{l,r : n(lar) > \u221aN} n(lar) / n(lbcr) ] (12) and for a, b \u2208 \u03a3 we define \u03c1\u0302 N (a \u2192 b) = log [ (\u00ca(b) / \u00ca(a)) max_{l,r : n(lar) > \u221aN} n(lar) / n(lbr) ]",
"eq_num": "(13)"
}
],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Note that \u03c1\u0302 N (a \u2192 bc) = \u221e if there is some context l, r such that n(lar) > \u221a N and n(lbcr) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
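A direct, unoptimized rendering of this plugin estimator (our own sketch: the names rho_hat and sample are ours, terminals are single characters, and the quadratic substring enumeration is purely for clarity):

```python
import math
from collections import Counter

def rho_hat(sample, a, u):
    """Naive plugin estimate of rho(a -> u), following Definition 4.1:
    log of (m(u)/m(a)) times the maximum of n(lar)/n(lur) over contexts
    l, r with n(lar) > sqrt(N).  Ê(u)/Ê(a) = m(u)/m(a) since 1/N cancels."""
    N = len(sample)
    n = Counter(sample)          # whole-string counts n(w)
    m = Counter()                # substring counts m(u)
    for w in sample:
        for i in range(len(w)):
            for j in range(i + 1, len(w) + 1):
                m[w[i:j]] += 1
    best = 0.0
    for w, c in n.items():
        if c <= math.sqrt(N):
            continue             # only frequent contexts are used
        for i in range(len(w)):  # every occurrence of a in w gives (l, r)
            if w[i:i + len(a)] == a:
                l, r = w[:i], w[i + len(a):]
                if n[l + u + r] == 0:
                    return math.inf          # the estimate is infinite
                best = max(best, c / n[l + u + r])
    return math.log(m[u] / m[a] * best) if best > 0 else math.inf

sample = ["xay"] * 10 + ["xby"] * 10
print(rho_hat(sample, "a", "b"))  # 0.0: b substitutes freely for a
print(rho_hat(sample, "a", "z"))  # inf: xzy never occurs
```

The binary estimate \u03c1\u0302 N (a \u2192 bc) is the same computation with u set to the two-symbol string bc.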
{
"text": "We can show the convergence of the estimators when one side is anchored, starting with the case when the divergence is infinite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Lemma 4.1. For some G; \u03b8 \u2208 W A suppose that a is an anchor for a nonterminal A and suppose that for some b \u2208 \u03a3, \u03c1(a \u2192 b) = \u221e. Then for every \u03b4 > 0, there is an N such that with probability at least 1 \u2212 \u03b4, \u03c1\u0302 N (a \u2192 b) = \u221e. Similarly, if there is a c such that \u03c1(c \u2192 a) = \u221e, then there is an N such that with probability at least 1 \u2212 \u03b4, \u03c1\u0302 N (c \u2192 a) = \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Lemma 4.2. For some G; \u03b8 \u2208 W A suppose that a is an anchor for a nonterminal A, b for B, and c for C. If \u03c1(a \u2192 bc) = \u221e, then for every \u03b4 > 0, there is an N such that with probability at least 1 \u2212 \u03b4, \u03c1\u0302 N (a \u2192 bc) = \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Proof. If A \u2192 BC were in P then \u03c1(a \u2192 bc) would be finite. So A \u2192 BC is not in P . By Condition 3.3, there must be some context l * , r * in a \u22b2 but not in (bc) \u22b2 , and so for sufficiently large N , l * ar * will occur more than \u221a N times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Lemma 4.3. For some G; \u03b8 \u2208 W A suppose that a is an anchor for a nonterminal A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Suppose \u03c1(a \u2192 b) is finite; then \u03c1\u0302 N (a \u2192 b) \u2192 \u03c1(a \u2192 b) as N \u2192 \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "Lemma 4.4. For some G; \u03b8 \u2208 W A suppose that a is an anchor for a nonterminal A, b for B, and c for C; if \u03c1(a \u2192 bc) is finite, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "\u03c1\u0302 N (a \u2192 bc) \u2192 \u03c1(a \u2192 bc) as N \u2192 \u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "When \u03c1 is finite the convergence is straightforward since |{l, r : n(lar) > \u221a N }| \u2264 \u221a N and so we can use Chernoff bounds in a standard way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Naive Estimators",
"sec_num": "4"
},
{
"text": "We can now define the algorithm, taking as input a sequence of strings w 1 , . . . , w N and using the trivial plugin estimators \u03c1\u0302 N . The pseudocode is presented in Algorithm A. The algorithm starts by identifying the set of terminals that are anchors, which is illustrated in Figure 1 . If a terminal d is not an anchor then there will be some terminal a which is an anchor such that \u03c1(a \u2192 d) < \u221e and \u03c1(d \u2192 a) = \u221e; in other words, such that a \u22b2 \u2282 d \u22b2 . If the \u03c1\u0302 N estimates are infinite iff \u03c1 is infinite, then we can see that \u0393 will be the set of possible anchors; that is, those terminals that occur on the right-hand side of exactly one production. Clearly, if a and b are anchors for the same nonterminal then \u03c1(a \u2192 b) = \u03c1(b \u2192 a) = 0, and if they are anchors for different nonterminals then \u03c1(a \u2192 b) = \u03c1(b \u2192 a) = \u221e, so we can just group them into equivalence classes and pick the most frequent one from each class as the anchor. The start symbol will be anchored by the symbol that occurs most frequently as a whole sentence. We can now prove that this algorithm is a consistent estimator for the class of WCFGs that we consider, W A .",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 285,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "Theorem 4.1. For every grammar G * , \u03b8 * \u2208 W A , for every \u01eb, \u03b4 > 0, there is an n such that when Algorithm A is run on a sample of N strings, N > n, generated i.i.d. from G * ; \u03b8 * it produces a WCFG G; \u03b8 such that with probability at least",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "1 \u2212 \u03b4: \u2022 G * is CFG-isomorphic to G; and \u2022 if \u03c6 is an isomorphism from G * to G, then |\u03b8 * (A \u2192 \u03b1) \u2212 \u03b8(\u03c6(A \u2192 \u03b1))| < \u01eb for all productions A \u2192 \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "Proof. (Sketch) Assume first that N is sufficiently large that \u03c1\u0302 N (a \u2192 b) is close to \u03c1(a \u2192 b) for all a, b such that either a or b is an anchor; we can then show that \u0393 in Line 2 is just the set of possible anchors, and a \u223c b will be true iff a, b are anchors for the same nonterminal. We define a bijection between the nonterminals of the hypothesis and the target. Line 5 picks the start symbol to be the unique anchor that can occur in a length 1 string. The grammar will have the right productions via Lemma 3.3, and the parameters will converge via Lemmas 4.3 and 4.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "The output of this is a WCFG that may be divergent: We therefore define Algorithm B that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "Input: A sequence of strings D = w 1 , w 2 , . . . , w N . Output: A WCFG G; \u03b8. 1 Compute \u03c1\u0302 N (a \u2192 b) for all a, b \u2208 \u03a3; 2 \u0393 \u2190 {a \u2208 \u03a3 | \u2200b \u2208 \u03a3, \u03c1\u0302 N (a \u2192 b) < \u221e \u2228 \u03c1\u0302 N (b \u2192 a) = \u221e}; 3 Define the equivalence relation on \u0393 given by a \u223c b iff \u03c1\u0302 N (a \u2192 b) < \u221e and \u03c1\u0302 N (b \u2192 a) < \u221e; let \u2206 be the set formed by picking the terminal a with maximal m(a) from each equivalence class in \u0393/\u223c; 4 V \u2190 {[[a]] | a \u2208 \u2206}; 5 s \u2190 arg max{n(a) | a \u2208 \u2206}; 6 P L \u2190 {[[a]] \u2192 b | a \u2208 \u2206, b \u2208 \u03a3, \u03c1\u0302 N (a \u2192 b) < \u221e}; 7 Compute \u03c1\u0302 N (a \u2192 bc) for all a, b, c \u2208 \u2206; 8 P B \u2190 {[[a]] \u2192 [[b]][[c]] | a, b, c \u2208 \u2206, \u03c1\u0302 N (a \u2192 bc) < \u221e}; 9 G \u2190 \u27e8\u03a3, V, [[s]], P L \u222a P B\u27e9; 10 \u03b8([[a]] \u2192 b) \u2190 e^{\u2212\u03c1\u0302 N (a \u2192 b)} \u00ca(b); 11 \u03b8([[a]] \u2192 [[b]][[c]]) \u2190 e^{\u2212\u03c1\u0302 N (a \u2192 bc)} \u00ca(bc) / (\u00ca(b)\u00ca(c)); 12 return G; \u03b8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "Algorithm A: WCFG learner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
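Lines 2 and 3 of Algorithm A (anchor selection and grouping) can be sketched as follows; the divergence estimates here are toy values supplied by hand rather than computed from data, and all names are ours:

```python
import math

inf = math.inf
sigma = ["a1", "a2", "c", "d"]
# Toy estimates rho[(x, y)] standing in for rho_hat_N(x -> y): a1 and a2
# anchor the same nonterminal, c anchors another, and d is an ambiguous
# non-anchor (d occurs in all the contexts of a1, so rho(a1 -> d) is
# finite while rho(d -> a1) is infinite).
rho = {("a1", "a2"): 0.0, ("a2", "a1"): 0.0,
       ("a1", "c"): inf, ("c", "a1"): inf,
       ("a2", "c"): inf, ("c", "a2"): inf,
       ("a1", "d"): 0.7, ("d", "a1"): inf,
       ("a2", "d"): 0.9, ("d", "a2"): inf,
       ("c", "d"): inf, ("d", "c"): inf}

# Line 2: Gamma = {a | for all b, rho(a -> b) < inf or rho(b -> a) = inf}
gamma = [a for a in sigma
         if all(rho.get((a, b), inf) < inf or rho.get((b, a), inf) == inf
                for b in sigma if b != a)]

# Line 3: group anchors of the same nonterminal (rho finite in both directions)
classes = []
for a in gamma:
    for cls in classes:
        if rho.get((cls[0], a), inf) < inf and rho.get((a, cls[0]), inf) < inf:
            cls.append(a)
            break
    else:
        classes.append([a])

print(gamma)    # ['a1', 'a2', 'c'] -- d is rejected as a non-anchor
print(classes)  # [['a1', 'a2'], ['c']] -- one class per nonterminal
```

The remaining lines then pick the most frequent member of each class as its representative anchor and read the productions and parameters off the finite divergence estimates.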
{
"text": "uses the inside outside (IO) algorithm (Eisner, 2016) to normalize the WCFG produced by Algorithm A; we take the output WCFG and run one iteration of the IO algorithm on the same data to estimate the expectations of all the rules that are then normalized to produce a PCFG. Proving the convergence of this estimator requires a little bit of care. Chi (1999) shows that the result of this procedure will always be a tight PCFG; the finite expectation of |\u03c4 | allows us to apply a variant of the dominated convergence theorem combined with the law of large numbers to show that this is a consistent estimator for the class of grammars P A .",
"cite_spans": [
{
"start": 39,
"end": 53,
"text": "(Eisner, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 347,
"end": 357,
"text": "Chi (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of the Algorithm",
"sec_num": "4.1"
},
{
"text": "The contributions of this paper are primarily theoretical but the reader may have legitimate concerns about the practicality of this approach given the naive estimator, the assumptions that are required, and the asymptotic nature of the correctness result. Here we present some computational simulations that address these issues, using synthetic PCFGs that mimic to a certain extent the observable properties of child-directed speech (Pearl and Sprouse, 2012) . We generate CFGs that have 10 nonterminals, 1,000 terminal symbols, and all possible rules in CNF; none of these grammars are in G A . To obtain a PCFG, we sample the parameters for the binary productions and an extra parameter for the lexical rules from a symmetric Dirichlet distribution with parameter \u03b1, which we vary to control the degree of ambiguity of the grammar. We then train these parameters using the IO algorithm to get a distribution of lengths close to a zero-truncated Poisson with parameter 5. We then sample the conditional lexical parameters from a multivariate log normal distribution with \u03c3 = 5. 4 To obtain a practical algorithm we follow Stratos et al. (2016) . We consider only the local context-the immediate preceding and following word including a distinguished sentence boundary marker-and use Ney-Essen clustering (Ney et al., 1994) with 20 clusters to get a low-dimensional feature space. We give the learning algorithm the true number of nonterminals as a hyperparameter (in contrast to Algorithm A, which learns the number of nonterminals) and run the NMF algorithm of Stratos et al. (2016) to find the anchors, considering only those that occur at least 1,000 times. We set the lexical parameters using the Frank-Wolfe algorithm, and the binary parameters using the Renyi divergence with \u03b1 = 5. 
To alleviate data sparsity in estimating the distribution of the anchor bigrams when computing the binary rule parameters, we use all bigrams consisting of words that have probability at least 0.9 of being derived from the respective nonterminal. This produces a WCFG (A), which may be divergent. We then run one iteration of the IO algorithm 5 to obtain a PCFG (B), and then a further 10 iterations to get another PCFG (C); this is guaranteed to increase the likelihood of the model; if the PCFG B is sufficiently close to the target then this will converge towards the global optimum, the ML estimate; if not it will only converge to a local optimum.",
"cite_spans": [
{
"start": 435,
"end": 460,
"text": "(Pearl and Sprouse, 2012)",
"ref_id": "BIBREF21"
},
{
"start": 1081,
"end": 1082,
"text": "4",
"ref_id": null
},
{
"start": 1125,
"end": 1146,
"text": "Stratos et al. (2016)",
"ref_id": "BIBREF28"
},
{
"start": 1307,
"end": 1325,
"text": "(Ney et al., 1994)",
"ref_id": "BIBREF20"
},
{
"start": 1565,
"end": 1586,
"text": "Stratos et al. (2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "For efficiency reasons we only run the IO algorithm on sentences of length at most 10, and we evaluate on lengths up to 20. The performance continues to improve with further iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "After fixing the hyperparameters, we generate 100 different PCFGs for each condition, and sample 10 6 sentences from each. We evaluate the results according to how well they recover the true tree structures. We sample 1,000 trees from the target PCFG and evaluate the Viterbi parse of the yield of the tree using labeled exact match in Figure 2 and micro-averaged unlabeled precision/recall in Figure 3 . 6 In all cases we exclude all forced choices so it is possible to score zero. The performance of the original grammar is a measure of the ambiguity of the grammar.",
"cite_spans": [
{
"start": 405,
"end": 406,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 394,
"end": 402,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "To see the effect of varying the degree of ambiguity, Figure 4 plots unlabeled exact match against the supervised baseline for values of \u03b1 \u2208 {0.01, 0.1, 1.0}. For \u03b1 = 1 both are close to the random baseline; apart from that extreme case we find the performance degrading smoothly as predicted by theory. The labeled exact match (not shown here) in contrast shows a more pronounced decrease.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "These grammars are about an order of magnitude smaller than plausible natural language grammars for child-directed speech as derived from the treebank in Pearl and Sprouse (2012) , but this is largely due to resource limitations: whereas Algorithm A is very fast, the IO algorithm is computationally expensive, and running these experiments on hundreds of synthetic grammars/languages at a time would be prohibitively expensive. It is certainly computationally feasible to run these experiments on single grammars with up to 100 nonterminals and 20,000 terminals. In small-scale experiments the results appear comparable with those we report here. The major failure mode is when there are nonterminals A where \u2211 a E(A \u2192 a) is very small. In those cases, though the grammar may be technically anchored, the anchors will be below the frequency threshold being considered. 7",
"cite_spans": [
{
"start": 154,
"end": 178,
"text": "Pearl and Sprouse (2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "An important question is whether this approach is directly applicable to natural language corpora, either of transcribed child-directed speech or of text; a number of the assumptions we make are clearly false. (Footnote 7: Full code for reproducing these experiments is available at https://github.com/alexc17/locallearner.) First, even looking at English, we can see that the anchoring assumption is too strong. For example, the expletive pronouns in English, there and it, are both ambiguous, since there is also an adverb and it is also a personal pronoun, and so if there is a nonterminal representing such pronouns, then it will not be anchored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability to Natural Language Corpora",
"sec_num": "6"
},
{
"text": "When we consider phrasal categories, the question of whether such nonterminals are anchored requires asking two questions: first, whether such nonterminals generate single words at all, and secondly whether among those words we can find anchors. The existence of pro-forms, such as pronouns in the case of noun phrases, guarantees this for at least some categories. Clearly, this is genre-dependent, because it is sensitive to sentence length. Here we look at the Adam corpus of child-directed speech in English as syntactically annotated in the Penn treebank style by Pearl and Sprouse (2012) . Table 1 shows the results. We can see that nonclausal categories are mostly anchored at this crude level of analysis, but that clausal categories are not. This implies that simple sentences without embedded clauses can be learned using this approach, but that learning complex clausal structures will require this approach to be extended at least to anchors of length more than one.",
"cite_spans": [
{
"start": 569,
"end": 593,
"text": "Pearl and Sprouse (2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 596,
"end": 603,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Applicability to Natural Language Corpora",
"sec_num": "6"
},
{
"text": "Most fundamentally, simple PCFGs of the type that we consider here are very poor models of natural language syntax. In order to obtain reasonable results, such grammars need to be lexicalized, since otherwise the independence assumptions of the PCFG are violated by semantic relations, for example between a verb and its subject. Thus the realizability assumption the approach relies on is dramatically false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability to Natural Language Corpora",
"sec_num": "6"
},
{
"text": "There are two ways of thinking about PCFGs: one is as a nontrivial CFG with parameters attached, where the support of the distribution is the language generated by the CFG; the other is where the CFG is trivial, containing all possible productions, and where the support is the set of all strings. We can call these sparse and dense PCFGs, respectively. Hsu et al. (2013) show that in the dense case the class of PCFGs is not identifiable without additional constraints, even when one can exclude a set of grammars of measure zero. (For technical reasons they consider only grammars where all probability mass is evenly distributed over all possible binary trees of a given length, and which are as a result highly ambiguous.) Table 1: Phrasal categories from the corpus of child-directed speech in Pearl and Sprouse (2012), showing the proportion of length 1 yields, the best anchor with frequency at least 10, and the proportion of tokens of that word that occur as a yield of that tag.",
"cite_spans": [
{
"start": 358,
"end": 375,
"text": "Hsu et al. (2013)",
"ref_id": "BIBREF13"
},
{
"start": 603,
"end": 627,
"text": "Pearl and Sprouse (2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "7"
},
{
"text": "The class of sparse PCFGs we consider, P A , has measure zero in their framework, and thus there is no incompatibility between their result and Theorem 3.2. However, there is some incompatibility between the empirical results in Section 5 and the result of Hsu et al. (2013). With the protocol used in Section 5 we are indeed trying to learn a nonidentifiable class, because the PCFGs are dense. However, the grammars are approximately anchored in the sense that for each nonterminal A there is a terminal a such that E(A \u2192 a) is very close to E(a); in these cases, even though there are different parameter settings that give rise to the same distribution over strings, they will all be quite close to each other. There have been many attempts to solve this problem in the decades since the learning problem was introduced by Horning (1969); a useful survey of older work on learning CFGs is contained in Lee (1996). One strand of research uses the IO algorithm to train a heuristically initialized grammar (Baker, 1979; Lari and Young, 1990; Pereira and Schabes, 1992; de Marcken, 1999). However, this approach is only guaranteed to converge to a local maximum of the likelihood, and it does not work well in practice. A related problem that we do not discuss in this paper is learning when the labeled tree structures are observed, essentially that of estimating a PCFG from a treebank, a problem that is algorithmically trivial and statistically well behaved, as Cohen and Smith (2012) show. The approach we take is most closely related to the work of Stratos et al. (2016) and to the work on weakly learning CFGs from samples generated by PCFGs developed by Shibata and Yoshinaka (2016). However, there are very few approaches to learning PCFGs with any nontrivial theoretical guarantees.",
"cite_spans": [
{
"start": 6,
"end": 7,
"text": "8",
"ref_id": null
},
{
"start": 851,
"end": 865,
"text": "Horning (1969)",
"ref_id": "BIBREF12"
},
{
"start": 931,
"end": 941,
"text": "Lee (1996)",
"ref_id": "BIBREF16"
},
{
"start": 1047,
"end": 1060,
"text": "(Baker, 1979;",
"ref_id": "BIBREF2"
},
{
"start": 1061,
"end": 1082,
"text": "Lari and Young, 1990;",
"ref_id": "BIBREF15"
},
{
"start": 1083,
"end": 1109,
"text": "Pereira and Schabes, 1992;",
"ref_id": "BIBREF22"
},
{
"start": 1110,
"end": 1127,
"text": "de Marcken, 1999)",
"ref_id": "BIBREF17"
},
{
"start": 1699,
"end": 1721,
"text": "Cohen and Smith (2012)",
"ref_id": "BIBREF6"
},
{
"start": 1788,
"end": 1809,
"text": "Stratos et al. (2016)",
"ref_id": "BIBREF28"
},
{
"start": 1888,
"end": 1916,
"text": "Shibata and Yoshinaka (2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "7"
},
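The approximate anchoring condition, that for each nonterminal A some terminal a has E(A -> a) very close to E(a), can be checked directly once the lexical expectations are known. A sketch with hypothetical expectation values (the numbers below are purely illustrative):

```python
from collections import defaultdict

def best_anchors(lexical_expectations):
    """For each nonterminal A, find the terminal a maximizing
    E(A -> a) / E(a), where E(a) sums E(B -> a) over all nonterminals B.
    A ratio of 1 means a is generated only by A (a true anchor); ratios
    close to 1 indicate approximate anchoring.

    `lexical_expectations` maps (A, a) -> E(A -> a), the expected number
    of times A rewrites to a in a random sentence."""
    e_terminal = defaultdict(float)
    for (_, a), e in lexical_expectations.items():
        e_terminal[a] += e
    by_nt = defaultdict(list)
    for (A, a), e in lexical_expectations.items():
        by_nt[A].append((a, e))
    # For each nonterminal, return the terminal with the highest ratio.
    return {A: max(((a, e / e_terminal[a]) for a, e in pairs),
                   key=lambda p: p[1])
            for A, pairs in by_nt.items()}
```

In the anchored case the best ratio is exactly 1 for every nonterminal; in the dense experiments the ratios are merely close to 1, which is why the learned parameter settings cluster together.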
{
"text": "The approach here is essentially an exemplar-based model: The syntactic categories are based on single strings of length 1. This can be naturally extended, mutatis mutandis, to sets of exemplars, and to exemplars of length greater than 1. The extension beyond CFGs to mildly context-sensitive grammars such as MCFGs (Seki et al., 1991) seems to present some problems that do not occur in the nonprobabilistic case (Clark and Yoshinaka, 2016); although the same bounds on the bottom-up parameters can be derived, identifying the set of anchors seems to be challenging.",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "(Seki et al., 1991)",
"ref_id": "BIBREF25"
},
{
"start": 415,
"end": 442,
"text": "(Clark and Yoshinaka, 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "7"
},
{
"text": "The variant of Algorithm A discussed in Section 5 is also interesting because it uses only local information in its initial phase: It relies only on the bigram and trigram counts, and it is only in the IO algorithm that a pass through the data using full sentences is made; this is compatible with psycholinguistic evidence about infants' abilities to track transitional probabilities (e.g., work following Saffran et al., 1996). The original version in Section 4, of course, uses complete sentences and not just the low-order counts.",
"cite_spans": [
{
"start": 424,
"end": 445,
"text": "Saffran et al., 1996)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "7"
},
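Transitional probabilities of the kind tracked in the statistical learning literature can be estimated from bigram counts alone. A self-contained sketch, illustrative rather than the actual initialization used by Algorithm A:

```python
from collections import Counter

def transitional_probabilities(sentences):
    """Forward transitional probability P(y | x) = count(x y) / count(x _),
    estimated from adjacent-pair counts only, as in word-segmentation work
    following Saffran et al. (1996). `sentences` is a list of token lists."""
    context = Counter()  # count of x occurring with a following word
    bigram = Counter()   # count of the adjacent pair (x, y)
    for s in sentences:
        for x, y in zip(s, s[1:]):
            context[x] += 1
            bigram[(x, y)] += 1
    return {xy: c / context[xy[0]] for xy, c in bigram.items()}
```

Only these low-order counts are needed for the initial phase; the full sentences enter only through the subsequent IO pass.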
{
"text": "Note that Equation 10 provides some theoretical justification for the long literature (Harris, 1955; McCauley and Christiansen, 2019) on using mutual information as a heuristic for unsupervised chunking. Although it is intuitively reasonable that chunks should correspond to subsequences with high pointwise mutual information, it is gratifying to finally have a mathematical basis for these intuitions.",
"cite_spans": [
{
"start": 86,
"end": 100,
"text": "(Harris, 1955;",
"ref_id": "BIBREF11"
},
{
"start": 101,
"end": 133,
"text": "McCauley and Christiansen, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "7"
},
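The heuristic in question can be sketched as follows: estimate the pointwise mutual information of adjacent word pairs from counts and treat high-PMI pairs as candidate chunks, in the spirit of Harris (1955). This is an illustration of the heuristic, not Equation 10 itself.

```python
import math
from collections import Counter

def bigram_pmi(sentences):
    """Return a function pmi(x, y) = log( p(x, y) / (p(x) p(y)) ),
    with unigram and adjacent-pair probabilities estimated by relative
    frequency from `sentences` (a list of token lists). Adjacent pairs
    with high PMI are candidate chunks."""
    word, pair = Counter(), Counter()
    n_words = n_pairs = 0
    for s in sentences:
        for w in s:
            word[w] += 1
            n_words += 1
        for x, y in zip(s, s[1:]):
            pair[(x, y)] += 1
            n_pairs += 1
    def pmi(x, y):
        p_xy = pair[(x, y)] / n_pairs
        return math.log(p_xy / ((word[x] / n_words) * (word[y] / n_words)))
    return pmi
```

A collocation such as "new york" scores higher than a pair like "the dog" whose first word co-occurs freely with many continuations.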
{
"text": "This is the expectation because if u occurs n times in a string w, there will be n distinct contexts l, r such that lur = w. We follow the classical definition of Chomsky normal form in not allowing S to occur on the right-hand side of any rule. This simplifies various parts of the analysis and makes the learning problem slightly harder, but it is not hard to remove this restriction if desired. Note that we do not allow an empty right-hand side of a production.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With an adjustment of log E(|w|) because they are expectations and not probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This gives a Zipfian long-tailed distribution. We experimented also with a truncation of a Pitman-Yor process, with similar results. We are grateful to Mark Johnson for his efficient C implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because both trees are binary, precision is equal to recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially carried out while the first author was a visiting researcher at The Alan Turing Institute. The second author was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1 and the DeLTA project (ANR-16-CE40-0007). We would like to thank the reviewers for helpful comments that have improved the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Learning shallow contextfree languages under simple distributions",
"authors": [
{
"first": "Pieter",
"middle": [],
"last": "Adriaans",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieter Adriaans. 1999. Learning shallow context- free languages under simple distributions. Technical Report ILLC Report PP-1999-13, Institute for Logic, Language and Computation, Amsterdam.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "James",
"middle": [
"K"
],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "Speech Communication Papers for the 97th Meeting of the Acoustic Society of America",
"volume": "",
"issue": "",
"pages": "547--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James K. Baker. 1979. Trainable grammars for speech recognition. In Speech Communication Papers for the 97th Meeting of the Acoustic Society of America, pages 547-550.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Statistical properties of probabilistic context-free grammars",
"authors": [
{
"first": "Zhiyi",
"middle": [],
"last": "Chi",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "1",
"pages": "131--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyi Chi. 1999. Statistical properties of proba- bilistic context-free grammars. Computational Linguistics, 25(1):131-160.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distributional learning of context-free and multiple context-free grammars",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Yoshinaka",
"suffix": ""
}
],
"year": 2016,
"venue": "Topics in Grammatical Inference",
"volume": "",
"issue": "",
"pages": "143--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Clark and Ryo Yoshinaka. 2016. Distri- butional learning of context-free and multiple context-free grammars. In Jeffrey Heinz and M. Jos\u00e9 Sempere, editors, Topics in Grammat- ical Inference, pages 143-172, Springer Berlin Heidelberg, Berlin, Heidelberg.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning",
"authors": [
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "3",
"pages": "479--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen and Noah A. Smith. 2012. Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning. Computational Linguistics, 38(3): 479-526.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning regular languages using RFSAs",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Aur\u00e9lien",
"middle": [],
"last": "Lemay",
"suffix": ""
},
{
"first": "Alain",
"middle": [],
"last": "Terlutte",
"suffix": ""
}
],
"year": 2004,
"venue": "Theoretical Computer Science",
"volume": "313",
"issue": "2",
"pages": "267--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Denis, Aur\u00e9lien Lemay, and Alain Terlutte. 2004. Learning regular languages using RFSAs. Theoretical Computer Science, 313(2):267-294.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Inside-outside and forwardbackward algorithms are just backprop (tutorial paper)",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Structured Prediction for NLP",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2016. Inside-outside and forward- backward algorithms are just backprop (tutorial paper). In Proceedings of the Workshop on Structured Prediction for NLP, pages 1-17.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Polynomial time algorithms for multi-type branching processes and stochastic context-free grammars",
"authors": [
{
"first": "Kousha",
"middle": [],
"last": "Etessami",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Mihalis",
"middle": [],
"last": "Yannakakis",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing",
"volume": "",
"issue": "",
"pages": "579--588",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kousha Etessami, Alistair Stewart, and Mihalis Yannakakis. 2012. Polynomial time algorithms for multi-type branching processes and stochas- tic context-free grammars. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, pages 579-588. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Affectedness and direct objects: The role of lexical semantics in the acquisition of verb argument structure",
"authors": [
{
"first": "Jess",
"middle": [],
"last": "Gropen",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Pinker",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Hollander",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 1991,
"venue": "Cognition",
"volume": "41",
"issue": "1",
"pages": "153--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jess Gropen, Steven Pinker, Michelle Hollander, and Richard Goldberg. 1991. Affectedness and direct objects: The role of lexical semantics in the acquisition of verb argument structure. Cognition, 41(1):153-195.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "From phonemes to morphemes. Language",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1955,
"venue": "",
"volume": "31",
"issue": "",
"pages": "190--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1955. From phonemes to mor- phemes. Language, 31:190-222.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Study of Grammatical Inference",
"authors": [
{
"first": "James",
"middle": [
"Jay"
],
"last": "Horning",
"suffix": ""
}
],
"year": 1969,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Jay Horning. 1969. A Study of Grammatical Inference. Ph.D. thesis, Computer Science Department, Stanford University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Identifiability and unmixing of latent parse trees",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Sham",
"middle": [
"M"
],
"last": "Kakade",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "1520--1528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Hsu, Sham M. Kakade, and Percy Liang. 2013. Identifiability and unmixing of latent parse trees. In Advances in Neural Information Processing Systems (NIPS), pages 1520-1528.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Moments of string and derivation lengths of stochastic context-free grammars",
"authors": [
{
"first": "Sandra",
"middle": [
"E"
],
"last": "Hutchins",
"suffix": ""
}
],
"year": 1972,
"venue": "Information Sciences",
"volume": "4",
"issue": "2",
"pages": "179--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra E. Hutchins. 1972. Moments of string and derivation lengths of stochastic context-free grammars. Information Sciences, 4(2):179-191.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The estimation of stochastic context-free grammars using the inside-outside algorithm",
"authors": [
{
"first": "Karim",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karim Lari and Stephen J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4:35-56.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning of context-free languages: A survey of the literature",
"authors": [
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lillian Lee. 1996. Learning of context-free lang- uages: A survey of the literature. Technical Report TR-12-96, Center for Research in Computing Technology, Harvard University.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On the unsupervised induction of phrase-structure grammars",
"authors": [
{
"first": "Carl",
"middle": [
"G"
],
"last": "de Marcken",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural Language Processing Using Very Large Corpora",
"volume": "",
"issue": "",
"pages": "191--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl G. de Marcken. 1999. On the unsupervised induction of phrase-structure grammars. In Nat- ural Language Processing Using Very Large Corpora, pages 191-208. Kluwer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language learning as language use: A cross-linguistic model of child language development",
"authors": [
{
"first": "Stewart",
"middle": [
"M"
],
"last": "McCauley",
"suffix": ""
},
{
"first": "Morten",
"middle": [
"H"
],
"last": "Christiansen",
"suffix": ""
}
],
"year": 2019,
"venue": "Psychological Review",
"volume": "126",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stewart M. McCauley and Morten H. Christiansen. 2019. Language learning as lan- guage use: A cross-linguistic model of child language development. Psychological Review, 126(1):1.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Computing partition functions of PCFGs",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2008,
"venue": "Research on Language and Computation",
"volume": "6",
"issue": "2",
"pages": "139--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark-Jan Nederhof and Giorgio Satta. 2008. Com- puting partition functions of PCFGs. Research on Language and Computation, 6(2):139-162.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On structuring probabilistic dependencies in stochastic language modelling",
"authors": [
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Ute",
"middle": [],
"last": "Essen",
"suffix": ""
},
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
}
],
"year": 1994,
"venue": "Computer Speech and Language",
"volume": "8",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependencies in stochastic language modelling. Computer Speech and Language, 8:1-38.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Computational models of acquisition for islands",
"authors": [
{
"first": "Lisa",
"middle": [],
"last": "Pearl",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Sprouse",
"suffix": ""
}
],
"year": 2012,
"venue": "Experimental Syntax and Island Effects",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Pearl and Jon Sprouse. 2012. Computational models of acquisition for islands. In J. Sprouse and N. Hornstein, editors, Experimental Syntax and Island Effects. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Insideoutside reestimation from partially bracketed corpora",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira and Yves Schabes. 1992. Inside- outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 128-135.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "On measures of entropy and information",
"authors": [
{
"first": "Alfr\u00e9d",
"middle": [],
"last": "R\u00e9nyi",
"suffix": ""
}
],
"year": 1961,
"venue": "Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfr\u00e9d R\u00e9nyi. 1961. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Statistical learning by eight month old infants",
"authors": [
{
"first": "Jenny",
"middle": [
"R"
],
"last": "Saffran",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
},
{
"first": "Elissa",
"middle": [
"L"
],
"last": "Newport",
"suffix": ""
}
],
"year": 1996,
"venue": "Science",
"volume": "274",
"issue": "",
"pages": "1926--1928",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny R. Saffran, Richard N. Aslin, and Elissa L. Newport. 1996. Statistical learning by eight month old infants. Science, 274:1926-1928.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On multiple context-free grammars",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1991,
"venue": "Theoretical Computer Science",
"volume": "88",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88(2):229.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Probabilistic learnability of context-free grammars with basic distributional properties from positive examples",
"authors": [
{
"first": "Chihiro",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Yoshinaka",
"suffix": ""
}
],
"year": 2016,
"venue": "Theoretical Computer Science",
"volume": "620",
"issue": "",
"pages": "46--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chihiro Shibata and Ryo Yoshinaka. 2016. Probabilistic learnability of context-free gram- mars with basic distributional properties from positive examples. Theoretical Computer Sci- ence, 620:46-72.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Weighted and probabilistic context-free grammars are equally expressive",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "477--491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Mark Johnson. 2007. Weighted and probabilistic context-free gram- mars are equally expressive. Computational Linguistics, 33(4):477-491.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Unsupervised part-of-speech tagging with anchor hidden Markov models",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "245--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Stratos, Michael Collins, and Daniel Hsu. 2016. Unsupervised part-of-speech tagging with anchor hidden Markov models. Trans- actions of the Association for Computational Linguistics, 4:245-257.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Diagram showing the terminal selection algorithm for a grammar with three nonterminals with anchors a, b, c. This diagram represents the space of context distributions: All terminals have a context distribution in the convex hull of the anchors. d \u2208 \u0393 because \u03c1(a \u2192 d) < \u221e but \u03c1(d \u2192 a) = \u221e, and it is therefore in the interior of the convex hull.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Box and whisker plot showing labeled exact match for 100 grammars sampled with \u03b1 = 0.01. We compare algorithms A, B, and C against gold (the target PCFG) and ML (the maximum likelihood PCFG learned by supervised learning from the training data).",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Box and whisker plot showing unlabeled accuracy. We add trivial baselines of left and right branching and random trees. 100 grammars sampled with \u03b1 = 0.01.",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Scatter plot showing unlabeled exact match with the x-axis showing the ML model and the y-axis showing the algorithm C for three different values of the Dirichlet hyperparameter for the binary rules, \u03b1 = 0.01, 0.1, and 1.0. The diagonal line is the theoretical upper bound.",
"type_str": "figure"
},
"TABREF0": {
"text": "If we have a production [[a]] \u2192 [[b]][[c]] in the grammar, we know there is a context such that",
"type_str": "table",
"content": "<table><tr><td>\u2126(S, lwr) = \u2126(S, l[[a]]r) \u2297 \u2126([[a]], w) where all the elements of \u2126(A, w) have an occurrence of</td></tr><tr><td>[[a]] \u2192 [[b]][[c]] at the root. Because we know that \u2126([[a]], bc) consists of a single tree using [[a]] \u2192 [[b]][[c]]; and \u2126(S, l[[b]][[c]]r) = \u2126(S, l[[a]]r) \u2297 \u2126([[a]], [[b]][[c]]), therefore \u2126(S, lbcr) =</td></tr><tr><td>\u2126(S, l[[a]]r)</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"text": "If [[a]] \u2192 b is not in the grammar, then",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}