{
"paper_id": "J05-2002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:50:25.695043Z"
},
"title": "A General Technique to Train Language Models on Language Models",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "markjan@let.rug.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton is provably minimal. This is a substantial generalization of an existing algorithm to train an n-gram model on the basis of a probabilistic context-free grammar.",
"pdf_parse": {
"paper_id": "J05-2002",
"_pdf_hash": "",
"abstract": [
{
"text": "We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton is provably minimal. This is a substantial generalization of an existing algorithm to train an n-gram model on the basis of a probabilistic context-free grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this article, the term language model is used to refer to any description that assigns probabilities to strings over a certain alphabet. Language models have important applications in natural language processing, and in particular, in speech recognition systems (Manning and Sch\u00fctze 1999) .",
"cite_spans": [
{
"start": 265,
"end": 291,
"text": "(Manning and Sch\u00fctze 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Language models often consist of a symbolic description of a language, such as a finite automaton (FA) or a context-free grammar (CFG), extended by a probability assignment to, for example, the transitions of the FA or the rules of the CFG, by which we obtain a probabilistic finite automaton (PFA) or probabilistic context-free grammar (PCFG), respectively. For certain applications, one may first determine the symbolic part of the automaton or grammar and in a second phase try to find reliable probability estimates for the transitions or rules. The current article is concerned with the second problem, that of extending FAs or CFGs to become PFAs or PCFGs. We refer to this process as training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Training is often done on the basis of a corpus of actual language use in a certain domain. If each sentence in this corpus is annotated by a list of transitions of an FA recognizing the sentence or a parse tree for a CFG generating the sentence, then training may consist simply in relative frequency estimation. This means that we estimate probabilities of transitions or rules by counting their frequencies in the corpus, relative to the frequencies of the start states of transitions or to the frequencies of the left-hand side nonterminals of rules, respectively. By this estimation, the likelihood of the corpus is maximized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The technique we introduce in this article is different in that training is done on the basis not of a finite corpus, but of an input language model. Our goal is to find estimations for the probabilities of transitions or rules of the input FA or CFG such that the resulting PFA or PCFG approximates the input language model as well as possible, or more specifically, such that the Kullback-Leibler (KL) distance (or relative entropy) between the input model and the trained model is minimized. The input FA or CFG to be trained may be structurally unrelated to the input language model. This technique has several applications. One is an extension with probabilities of existing work on approximation of CFGs by means of FAs (Nederhof 2000) . The motivation for this work was that application of FAs is generally less costly than application of CFGs, which is an important benefit when the input is very large, as is often the case in, for example, speech recognition systems. The practical relevance of this work was limited, however, by the fact that in practice one is more interested in the probabilities of sentences than in a purely Boolean distinction between grammatical and ungrammatical sentences.",
"cite_spans": [
{
"start": 726,
"end": 741,
"text": "(Nederhof 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Several approaches were discussed by Mohri and Nederhof (2001) to extend this work to approximation of PCFGs by means of PFAs. A first approach is to directly map rules with attached probabilities to transitions with attached probabilities. Although this is computationally the easiest approach, the resulting PFA may be a very inaccurate approximation of the probability distribution described by the input PCFG. In particular, there may be assignments of probabilities to the transitions of the same FA that lead to more accurate approximating language models.",
"cite_spans": [
{
"start": 37,
"end": 62,
"text": "Mohri and Nederhof (2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A second approach is to train the approximating FA by means of a corpus. If the input PCFG was itself obtained by training on a corpus, then we already possess training material. However, this may not always be the case, and no training material may be available. Furthermore, as a determinized approximating FA may be much larger than the input PCFG, the sparse-data problem may be more severe for the automaton than it was for the grammar. 1 Hence, even if sufficient material was available to train the CFG, it may not be sufficient to accurately train the FA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A third approach is to construct a training corpus from the PCFG by means of a (pseudo)random generator of sentences, such that sentences that are more likely according to the PCFG are generated with greater likelihood. This has been proposed by Jurafsky et al. (1994) , for the special case of bigrams, extending a nonprobabilistic technique by Zue et al. (1991) . It is not clear, however, whether this idea is feasible for training of finite-state models that are larger than bigrams. The reason is that very large corpora would have to be generated in order to obtain accurate probability estimates for the PFA. Note that the number of parameters of a bigram model is bounded by the square of the size of the lexicon; such a bound does not exist for general PFAs.",
"cite_spans": [
{
"start": 246,
"end": 268,
"text": "Jurafsky et al. (1994)",
"ref_id": "BIBREF4"
},
{
"start": 346,
"end": 363,
"text": "Zue et al. (1991)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The current article discusses a fourth approach. In the limit, it is equivalent to the third approach above, as if an infinite corpus were constructed on which the PFA is trained, but we have found a way to avoid considering sentences individually. The key idea that allows us to handle an infinite set of strings generated by the PCFG is that we construct a new grammar that represents the intersection of the languages described by the input PCFG and the FA. Within this new grammar, we can compute the expected frequencies of transitions of the FA, using a fairly standard analysis of PCFGs. These expected frequencies then allow us to determine the assignment of probabilities to transitions of the FA that minimizes the KL distance between the PCFG and the resulting PFA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The only requirement is that the FA to be trained be unambiguous, by which we mean that each input string can be recognized by at most one computation of the FA. The special case of n-grams has already been formulated by Stolcke and Segal (1994) , realizing an idea previously envisioned by Rimon and Herz (1991) . An n-gram model is here seen as a (P)FA that contains exactly one state for each possible history of the n \u2212 1 previously read symbols. It is clear that such an FA is unambiguous (even deterministic) and that our technique therefore properly subsumes the technique by Stolcke and Segal (1994) , although the way that the two techniques are formulated is rather different. Also note that the FA underlying an n-gram model accepts any input string over the alphabet, which does not hold for general (unambiguous) FAs.",
"cite_spans": [
{
"start": 221,
"end": 245,
"text": "Stolcke and Segal (1994)",
"ref_id": "BIBREF16"
},
{
"start": 291,
"end": 312,
"text": "Rimon and Herz (1991)",
"ref_id": "BIBREF12"
},
{
"start": 583,
"end": 607,
"text": "Stolcke and Segal (1994)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Another application of our work involves determinization and minimization of PFAs. As shown by Mohri (1997) , PFAs cannot always be determinized, and no practical algorithms are known to minimize arbitrary nondeterministic (P)FAs. This can be a problem when deterministic or small PFAs are required. We can, however, always compute a minimal deterministic FA equivalent to an input FA. The new results in this article offer a way to extend this determinized FA to a PFA such that it approximates the probability distribution described by the input PFA as well as possible, in terms of the KL distance.",
"cite_spans": [
{
"start": 95,
"end": 107,
"text": "Mohri (1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Although the proposed technique has some limitations, in particular, that the model to be trained is unambiguous, it is by no means restricted to language models based on finite automata or context-free grammars, as several other probabilistic grammatical formalisms can be treated in a similar manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The structure of this article is as follows. We provide some preliminary definitions in Section 2. Section 3 discusses how the expected frequency of a rule in a PCFG can be computed. This is an auxiliary step in the algorithms to be discussed below. Section 4 defines a way to combine a PFA and a PCFG into a new PCFG that extends a well-known representation of the intersection of a regular and a context-free language. Thereby we merge the input model and the model to be trained into a single structure. This structure is the foundation for a number of algorithms, presented in Section 5, which allow, respectively, training of an unambiguous FA on the basis of a PCFG (Section 5.1), training of an unambiguous CFG on the basis of a PFA (Section 5.2), and training of an unambiguous FA on the basis of a PFA (Section 5.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Many of the definitions on probabilistic context-free grammars are based on Santos (1972) and Booth and Thompson (1973) , and the definitions on probabilistic finite automata are based on Paz (1971) and Starke (1972) .",
"cite_spans": [
{
"start": 76,
"end": 89,
"text": "Santos (1972)",
"ref_id": "BIBREF13"
},
{
"start": 94,
"end": 119,
"text": "Booth and Thompson (1973)",
"ref_id": "BIBREF2"
},
{
"start": 188,
"end": 198,
"text": "Paz (1971)",
"ref_id": "BIBREF11"
},
{
"start": 203,
"end": 216,
"text": "Starke (1972)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "A context-free grammar G is a 4-tuple (\u03a3, N, S, R), where \u03a3 and N are two finite disjoint sets of terminals and nonterminals, respectively, S \u2208 N is the start symbol, and R is a finite set of rules, each of the form A \u2192 \u03b1, where A \u2208 N and \u03b1 \u2208 (\u03a3 \u222a N) * . A probabilistic context-free grammar G is a 5-tuple (\u03a3, N, S, R, p G ), where \u03a3, N, S and R are as above, and p G is a function from rules in R to probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "In what follows, symbol a ranges over the set \u03a3, symbols w, v range over the set \u03a3 * , symbols A, B range over the set N, symbol X ranges over the set \u03a3 \u222a N, symbols \u03b1, \u03b2, \u03b3 range over the set (\u03a3 \u222a N) * , symbol \u03c1 ranges over the set R, and symbols d, e range over the set R * . With slight abuse of notation, we treat a rule \u03c1 = (A \u2192 \u03b1) \u2208 R as an atomic symbol when it occurs within a string d\u03c1e \u2208 R * . The symbol \u03b5 denotes the empty string. String concatenation is represented by operator \u2022 or by empty space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "For a fixed (P)CFG G, we define the relation \u21d2 on triples consisting of two strings \u03b1, \u03b2 \u2208 (\u03a3 \u222a N) * and a rule \u03c1 \u2208 R by \u03b1 \u03c1 \u21d2 \u03b2, if and only if \u03b1 is of the form wA\u03b4 and \u03b2 is of the form w\u03b3\u03b4, for some w \u2208 \u03a3 * and \u03b4 \u2208 (\u03a3 \u222a N) * , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "\u03c1 = (A \u2192 \u03b3). A leftmost derivation (in G) is a string d = \u03c1 1 \u2022 \u2022 \u2022 \u03c1 m , m \u2265 0, such that \u03b1 0 \u03c1 1 \u21d2 \u03b1 1 \u03c1 2 \u21d2 \u2022 \u2022 \u2022 \u03c1 m \u21d2 \u03b1 m , for some \u03b1 0 , . . . , \u03b1 m \u2208 (\u03a3 \u222a N) * ; d = \u03b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "is always a leftmost derivation. In the remainder of this article, we let the term derivation refer to leftmost derivation, unless specified otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "If \u03b1 0 \u03c1 1 \u21d2 \u2022 \u2022 \u2022 \u03c1 m \u21d2 \u03b1 m for some \u03b1 0 , . . . , \u03b1 m \u2208 (\u03a3 \u222a N) * , then we say that d = \u03c1 1 \u2022 \u2022 \u2022 \u03c1 m derives \u03b1 m from \u03b1 0 , and we write \u03b1 0 d \u21d2 \u03b1 m ; \u03b5 derives any \u03b1 0 \u2208 (\u03a3 \u222a N) * from itself. A derivation d such that S d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "\u21d2 w, for some w \u2208 \u03a3 * , is called a complete derivation. We say that G is unambiguous if for each w \u2208",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "\u03a3 * , S d \u21d2 w for at most one d \u2208 R * . Let G be a fixed PCFG (\u03a3, N, S, R, p G ). For \u03b1, \u03b2 \u2208 (\u03a3 \u222a N) * and d = \u03c1 1 \u2022 \u2022 \u2022 \u03c1 m \u2208 R * , m \u2265 0, we define p G (\u03b1 d \u21d2 \u03b2) = \u220f_{i=1}^{m} p G (\u03c1 i ) if \u03b1 d \u21d2 \u03b2, and p G (\u03b1 d \u21d2 \u03b2) = 0 otherwise. The probability p G (w) of a string w \u2208 \u03a3 * is defined to be \u2211_d p G (S d \u21d2 w). PCFG G is said to be proper if \u2211_{\u03c1,\u03b1} p G (A \u03c1 \u21d2 \u03b1) = 1 for all A \u2208 N,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "that is, if the probabilities of all rules \u03c1 = (A \u2192 \u03b1) with left-hand side A sum to one. PCFG G is said to be consistent if \u2211_w p G (w) = 1. Consistency implies that the PCFG defines a probability distribution on the set of terminal strings. There is a practical sufficient condition for consistency that is decidable (Booth and Thompson 1973) .",
"cite_spans": [
{
"start": 316,
"end": 341,
"text": "(Booth and Thompson 1973)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "A PCFG is said to be reduced if for each nonterminal A, there are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "d 1 , d 2 \u2208 R * , w 1 , w 2 \u2208 \u03a3 * , and \u03b2 \u2208 (\u03a3 \u222a N) * such that p G (S d 1 \u21d2 w 1 A\u03b2) \u2022 p G (w 1 A\u03b2 d 2 \u21d2 w 1 w 2 ) > 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "In words, if a PCFG is reduced, then for each nonterminal A, there is at least one derivation d 1 d 2 with nonzero probability that derives a string w 1 w 2 from S and that includes some rule with left-hand side A. A PCFG G that is not reduced can be turned into one that is reduced and that describes the same probability distribution, provided that \u2211_w p G (w) > 0. This reduction consists in removing from the grammar any nonterminal A for which the above conditions do not hold, together with any rule that contains such a nonterminal; see Aho and Ullman (1972) for reduction of CFGs, which is very similar.",
"cite_spans": [
{
"start": 542,
"end": 563,
"text": "Aho and Ullman (1972)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "A finite automaton M is a 5-tuple (\u03a3, Q, q 0 , q f , T), where \u03a3 and Q are two finite sets of terminals and states, respectively, q 0 , q f \u2208 Q are the initial and final states, respectively, and T is a finite set of transitions, each of the form r a \u2192 s, where r \u2208 Q \u2212 {q f }, s \u2208 Q, and a \u2208 \u03a3. 2 A probabilistic finite automaton M is a 6-tuple (\u03a3, Q, q 0 , q f , T, p M ), where \u03a3, Q, q 0 , q f , and T are as above, and p M is a function from transitions in T to probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "In what follows, symbols q, r, s range over the set Q, symbol \u03c4 ranges over the set T, and symbol c ranges over the set T * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "For a fixed (P)FA M, we define a configuration to be an element of Q \u00d7 \u03a3 * , and we define the relation on triples consisting of two configurations and a transition \u03c4 \u2208 T by (r, w) \u03c4 (s, w' ) if and only if w is of the form aw' , for some a \u2208 \u03a3, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "\u03c4 = (r a \u2192 s). A computation (in M) is a string c = \u03c4 1 \u2022 \u2022 \u2022 \u03c4 m , m \u2265 0, such that (r 0 , w 0 ) \u03c4 1 (r 1 , w 1 ) \u03c4 2 \u2022 \u2022 \u2022 \u03c4 m (r m , w m ), for some (r 0 , w 0 ), . . . , (r m , w m ) \u2208 Q \u00d7 \u03a3 * ; c = \u03b5 is always a computation. If (r 0 , w 0 ) \u03c4 1 \u2022 \u2022 \u2022 \u03c4 m (r m , w m ) for some (r 0 , w 0 ), . . . , (r m , w m ) \u2208 Q \u00d7 \u03a3 * and c = \u03c4 1 \u2022 \u2022 \u2022 \u03c4 m \u2208 T *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": ", then we write (r 0 , w 0 ) c (r m , w m ). We say that c recognizes w if (q 0 , w) c (q f , \u03b5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "Let M be a fixed FA (\u03a3, Q, q 0 , q f , T). The language L(M) accepted by M is defined to be {w \u2208 \u03a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "* | \u2203 c [(q 0 , w) c (q f , \u03b5)]}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "We say M is unambiguous if for each w \u2208 \u03a3 * , (q 0 , w) c (q f , \u03b5) for at most one c \u2208 T * . We say M is deterministic if for each (r, w) \u2208 Q \u00d7 \u03a3 * , there is at most one combination of \u03c4 \u2208 T and (s, w' ) \u2208 Q \u00d7 \u03a3 * such that (r, w) \u03c4 (s, w' ). Turning a given FA into one that is deterministic and accepts the same language is called determinization. All FAs can be determinized. Turning a given (deterministic) FA into the smallest (deterministic) FA that accepts the same language is called minimization. There are effective algorithms for minimization of deterministic FAs. Let M be a fixed PFA (\u03a3, Q, q 0 , q f , T, p M ). For (r, w), (s, v) \u2208 Q \u00d7 \u03a3 * and c = \u03c4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "1 \u2022 \u2022 \u2022 \u03c4 m \u2208 T * , we define p M ((r, w) c (s, v)) = \u220f_{i=1}^{m} p M (\u03c4 i ) if (r, w) c (s, v),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "p M ((r, w) c (s, v)) = 0 otherwise. The probability p M (w) of a string w \u2208 \u03a3 * is defined to be \u2211_c p M ((q 0 , w) c (q f , \u03b5)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "PFA M is said to be proper if \u2211_{\u03c4,a,s: \u03c4=(r a \u2192 s)\u2208T} p M (\u03c4) = 1 for all r \u2208 Q \u2212 {q f }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2."
},
{
"text": "Let G be a PCFG (\u03a3, N, S, R, p G ). We assume without loss of generality that S does not occur in the right-hand side of any rule from R. For each rule \u03c1, we define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(\u03c1) = \u2211_{d,d',w} p G (S d\u03c1d' \u21d2 w)",
"eq_num": "( 1 )"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "If G is proper and consistent, (1) is the expected frequency of \u03c1 in a complete derivation. Each complete derivation d\u03c1d can be written as d\u03c1d d , with",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d' = d''d''', where S d \u21d2 w'A\u03b2, A \u03c1 \u21d2 \u03b1, \u03b1 d'' \u21d2 w'', \u03b2 d''' \u21d2 w'''",
"eq_num": "(2)"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "for some A, \u03b1, \u03b2, w', w'', and w'''. Therefore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(\u03c1) = outer(A) \u2022 p G (\u03c1) \u2022 inner(\u03b1)",
"eq_num": "( 3 )"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "where we define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "outer(A) = \u2211_{d,w',\u03b2,d'',w''} p G (S d \u21d2 w'A\u03b2) \u2022 p G (\u03b2 d'' \u21d2 w'') ( 4 ) inner(\u03b1) = \u2211_{d',w'} p G (\u03b1 d' \u21d2 w')",
"eq_num": "( 5 )"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "for each A \u2208 N and \u03b1 \u2208 (\u03a3 \u222a N) * . From the definition of inner, we can easily derive the following equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "inner(a) = 1 ( 6 ) inner(A) = \u2211_{\u03c1,\u03b1: \u03c1=(A\u2192\u03b1)} p G (\u03c1) \u2022 inner(\u03b1)",
"eq_num": "( 7 )"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "inner(X\u03b2) = inner(X) \u2022 inner(\u03b2)",
"eq_num": "( 8 )"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "This can be taken as a recursive definition of inner, assuming \u03b2 \u2260 \u03b5 in (8). Similarly, we can derive a recursive definition of outer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "outer(S) = 1 ( 9 ) outer(A) = \u2211_{\u03c1,B,\u03b1,\u03b2: \u03c1=(B\u2192\u03b1A\u03b2)} outer(B) \u2022 p G (\u03c1) \u2022 inner(\u03b1) \u2022 inner(\u03b2)",
"eq_num": "(10)"
}
],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "for A \u2260 S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "In general, there may be cyclic dependencies in the equations for inner and outer; that is, for certain nonterminals A, inner(A) and outer(A) may be defined in terms of themselves. There may even be no closed-form expression for inner(A). However, one may approximate the solutions to arbitrary precision by means of fixed-point iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expected Frequencies of Rules",
"sec_num": "3."
},
{
"text": "We recall a construction from Bar-Hillel, Perles, and Shamir (1964) that computes the intersection of a context-free language and a regular language. The input consists of a CFG G = (\u03a3, N, S, R) and an FA M = (\u03a3, Q, q 0 , q f , T); note that we assume, without loss of generality, that G and M share the same set of terminals \u03a3.",
"cite_spans": [
{
"start": 30,
"end": 67,
"text": "Bar-Hillel, Perles, and Shamir (1964)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "The output of the construction is CFG",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "G \u2229 = (\u03a3, N \u2229 , S \u2229 , R \u2229 ), where N \u2229 = Q \u00d7 (\u03a3 \u222a N) \u00d7 Q, S \u2229 = (q 0 , S, q f )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": ", and R \u2229 consists of the set of rules that is obtained as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "For each rule \u03c1 = (A \u2192 X 1 \u2022 \u2022 \u2022 X m ) \u2208 R and each sequence of states r 0 , . . . , r m \u2208 Q, let the rule \u03c1 \u2229",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "= ((r 0 , A, r m ) \u2192 (r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m )) be in R \u2229 ; for m = 0, R \u2229 contains a rule \u03c1 \u2229 = ((r 0 , A, r 0 ) \u2192 \u03b5) for each state r 0 . For each transition \u03c4 = (r a \u2192 s) \u2208 T, let the rule \u03c1 \u2229 = ((r, a, s) \u2192 a) be in R \u2229 . Note that for each rule (r 0 , A, r m ) \u2192 (r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m ) from R \u2229 , there is a unique rule A \u2192 X 1 \u2022 \u2022 \u2022 X m from R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "from which it has been constructed by the above. Similarly, each rule (r, a, s) \u2192 a uniquely identifies a transition r a \u2192 s. This means that if we take a derivation d \u2229 in G \u2229 , we can extract a sequence h 1 (d \u2229 ) of rules from G and a sequence h 2 (d \u2229 ) of transitions from M, where h 1 and h 2 are string homomorphisms that we define pointwise as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h 1 (\u03c1 \u2229 ) = \u03c1 if \u03c1 \u2229 = ((r 0 , A, r m ) \u2192 (r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m )) and \u03c1 = (A \u2192 X 1 \u2022 \u2022 \u2022 X m ) (11) h 1 (\u03c1 \u2229 ) = \u03b5 if \u03c1 \u2229 = ((r, a, s) \u2192 a)",
"eq_num": "(12)"
}
],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h 2 (\u03c1 \u2229 ) = \u03c4 if \u03c1 \u2229 = ((r, a, s) \u2192 a) and \u03c4 = (r a \u2192 s)",
"eq_num": "(13)"
}
],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "h 2 (\u03c1 \u2229 ) = \u03b5 if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c1 \u2229 = ((r 0 , A, r m ) \u2192 (r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m ))",
"eq_num": "(14)"
}
],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "It has been shown by Lang (1994) that G \u2229 can be seen as a parse forest, that is, a compact representation of all parse trees according to G that derive strings recognized by M. The construction can be generalized to, for example, tree-adjoining grammars (Vijay-Shanker and Weir 1993) and range concatenation grammars (Boullier 2000; Bertsch and Nederhof 2001) . The construction for the latter also has implications for linear context-free rewriting systems (Seki et al. 1991) .",
"cite_spans": [
{
"start": 10,
"end": 21,
"text": "Lang (1994)",
"ref_id": "BIBREF5"
},
{
"start": 307,
"end": 322,
"text": "(Boullier 2000;",
"ref_id": "BIBREF3"
},
{
"start": 323,
"end": 349,
"text": "Bertsch and Nederhof 2001)",
"ref_id": "BIBREF1"
},
{
"start": 448,
"end": 466,
"text": "(Seki et al. 1991)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "h(d \u2229 ) = (h 1 (d \u2229 ), h 2 (d \u2229 )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "The construction has been extended by Nederhof and Satta (2003) to apply to a PCFG G = (\u03a3, N, S, R, p G ) and a PFA M = (\u03a3, Q, q 0 , q f , T, p M ). The output is a PCFG",
"cite_spans": [
{
"start": 38,
"end": 63,
"text": "Nederhof and Satta (2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "G \u2229 = (\u03a3, N \u2229 , S \u2229 , R \u2229 , p \u2229 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": ", where N \u2229 , S \u2229 , and R \u2229 are as before, and p \u2229 is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u2229 ((r 0 , A, r m ) \u2192 (r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m )) = p G (A \u2192 X 1 \u2022 \u2022 \u2022 X m )",
"eq_num": "(15)"
}
],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "p \u2229 ((r, a, s) ",
"cite_spans": [
{
"start": 4,
"end": 14,
"text": "((r, a, s)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2192 a) = p M (r a \u2192 s)",
"eq_num": "(16)"
}
],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "If d \u2229 , d, and c are such that h(d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
{
"text": "\u2229 ) = (d, c), then clearly p \u2229 (d \u2229 ) = p G (d) \u2022 p M (c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection of Context-Free and Regular Languages",
"sec_num": "4."
},
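The weighted intersection in equations (14) through (16) can be sketched directly: each PCFG rule is combined with every sequence of automaton states, and each PFA transition becomes a lexical rule. The following is a minimal illustration under assumed encodings (`pcfg_rules` as `(lhs, rhs, prob)` triples, `pfa_trans` as a map for a deterministic PFA), not the paper's implementation; it also omits the reduction step that removes useless rules.

```python
from itertools import product

def intersect(pcfg_rules, pfa_trans, states):
    """Weighted Bar-Hillel construction, sketching equations (14)-(16).

    pcfg_rules: list of (lhs, rhs_tuple, prob); pfa_trans: dict mapping
    (state, terminal) -> (state, prob) for a deterministic PFA.
    Returns the rules of G_cap with their probabilities (no reduction).
    """
    out = []
    # Equation (15): one intersected rule per PCFG rule and state sequence,
    # carrying over the rule's probability unchanged.
    for lhs, rhs, p in pcfg_rules:
        m = len(rhs)
        for seq in product(states, repeat=m + 1):
            new_lhs = (seq[0], lhs, seq[m])
            new_rhs = tuple((seq[i], rhs[i], seq[i + 1]) for i in range(m))
            out.append((new_lhs, new_rhs, p))
    # Equation (16): lexical rules (r, a, s) -> a weighted by the transition.
    for (r, a), (s, p) in pfa_trans.items():
        out.append(((r, a, s), (a,), p))
    return out
```

For a one-rule grammar over two states, the construction yields four intersected rules (one per state pair) plus one lexical rule; only reduction would discard the useless ones.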
{
"text": "We restrict ourselves to a few cases of the general technique of training a model on the basis of another model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Models on Models",
"sec_num": "5."
},
{
"text": "Let us assume we have a proper and consistent PCFG G = (\u03a3, N, S, R, p G ) and an FA M = (\u03a3, Q, q 0 , q f , T) that is unambiguous. This FA may have resulted from (nonprobabilistic) approximation of CFG (\u03a3, N, S, R) , but it may also be totally unrelated to G.",
"cite_spans": [
{
"start": 202,
"end": 214,
"text": "(\u03a3, N, S, R)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "Note that an FA is guaranteed to be unambiguous if it is deterministic; any FA can be determinized. Our goal is now to assign probabilities to the transitions from FA M to obtain a proper PFA that approximates the probability distribution described by G as well as possible. Let us define 1 as the function that maps each transition from T to one. This means that for each r, w, c and s, 1 ((r, w) c (s, )) = 1 if (r, w) c (s, ), and 1((r, w) c (s, )) = 0 otherwise. Of the set of strings generated by G, a subset is recognized by computations of M; note again that there can be at most one such computation for each string. The expected frequency of a transition \u03c4 in such computations is given by",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 397,
"text": "((r, w)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(\u03c4) = w,c,c p G (w) \u2022 1((q 0 , w) c\u03c4c (q f , ))",
"eq_num": "(17)"
}
],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
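For intuition, equation (17) can be checked by brute force whenever p_G is replaced by a distribution with finite support: run the deterministic FA on each string and accumulate the string's probability onto every transition the unique computation uses. The encoding below (`fa` as a transition map, `p_g` as a finite toy distribution) is hypothetical, chosen only for illustration.

```python
def run(fa, q0, qf, w):
    """Return the unique computation (list of transitions) of a deterministic
    FA on string w, or None if w is not recognized.
    fa: dict mapping (state, symbol) -> state."""
    q, comp = q0, []
    for a in w:
        if (q, a) not in fa:
            return None
        s = fa[(q, a)]
        comp.append((q, a, s))
        q = s
    return comp if q == qf else None

def expected_freq(p_g, fa, q0, qf):
    """Equation (17) by enumeration: E(tau) sums p_G(w) over every occurrence
    of tau in the computation recognizing w, for w in the (finite) support."""
    e = {}
    for w, p in p_g.items():
        comp = run(fa, q0, qf, w)
        if comp is None:
            continue  # w is outside L(M); it contributes nothing.
        for tau in comp:
            e[tau] = e.get(tau, 0.0) + p
    return e
```

With p_g = {'ab': 0.5, 'cb': 0.5} and transitions q0-a->q1, q0-c->q1, q1-b->q2, the shared transition q1-b->q2 receives expected frequency 1.0 while each initial transition receives 0.5.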
{
"text": "Now we construct the PCFG G \u2229 as explained in section 4 from the PCFG G and the PFA (\u03a3, Q, q 0 , q f , T, 1). Let \u03c4 = (r a \u2192 s) \u2208 T and \u03c1 = ((r, a, s) \u2192 a). On the basis of the properties of function h, we can now rewrite E(\u03c4) as d,w,c,c : h(e) ",
"cite_spans": [
{
"start": 230,
"end": 244,
"text": "d,w,c,c : h(e)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "E(\u03c4) = d,w,c,c p G (S d \u21d2 w) \u2022 1((q 0 , w) c\u03c4c (q f , )) = e,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "=(d,c\u03c4c ) p G (S d \u21d2 w) \u2022 1((q 0 , w) c\u03c4c (q f , )) = e,e ,w p \u2229 (S \u2229 e\u03c1e \u21d2 w) = E(\u03c1)",
"eq_num": "(18)"
}
],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "Hereby we have expressed the expected frequency of a transition \u03c4 = (r a \u2192 s) in terms of the expected frequency of rule \u03c1 = ((r, a, s) \u2192 a) in derivations in PCFG G \u2229 . It was explained in section 3 how such a value can be computed. Note that since by definition 1(\u03c4) = 1, also p \u2229 (\u03c1) = 1. Furthermore, for the right-hand side a of \u03c1, inner(a) = 1. Therefore,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(\u03c4) = outer((r, a, s)) \u2022 p \u2229 (\u03c1) \u2022 inner(a) = outer((r, a, s))",
"eq_num": "(19)"
}
],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "To obtain the required PFA (\u03a3, Q, q 0 , q f , T, p M ), we now define the probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "function p M for each \u03c4 = (r a \u2192 s) \u2208 T as p M (\u03c4) = outer((r, a, s)) a ,s :(r a \u2192s )\u2208T outer((r, a , s ))",
"eq_num": "(20)"
}
],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "That such a relative frequency estimator p M minimizes the KL distance between p G and p M on the domain L(M) is proven in the appendix. An example with finite languages is given in Figure 1 . We have, for example,",
"cite_spans": [],
"ref_spans": [
{
"start": 182,
"end": 190,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p M (q 0 a \u2192 q 1 ) = outer((q 0 , a, q 1 )) outer((q 0 , a, q 1 )) + outer((q 0 , c, q 1 )) = 1 3 1 3 + 2 3 = 1 3",
"eq_num": "(21)"
}
],
"section": "Training a PFA on a PCFG",
"sec_num": "5.1"
},
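The estimator of equation (20) is a per-state relative frequency: each transition's outer value is divided by the sum of the outer values of all transitions leaving the same state. A minimal sketch, using the outer values quoted in equation (21) (the dictionary encoding is an assumption for illustration):

```python
def train_pfa(outer):
    """Equation (20): p_M(r -a-> s) = outer((r,a,s)) normalized over all
    transitions leaving state r.  outer: dict (r, a, s) -> expected frequency."""
    totals = {}
    for (r, a, s), v in outer.items():
        totals[r] = totals.get(r, 0.0) + v
    return {t: v / totals[t[0]] for t, v in outer.items()}

# Outer values from equation (21): outer((q0,a,q1)) = 1/3, outer((q0,c,q1)) = 2/3.
p_m = train_pfa({('q0', 'a', 'q1'): 1 / 3, ('q0', 'c', 'q1'): 2 / 3})
```

Since the outer values of the transitions leaving q0 already sum to one here, normalization leaves them unchanged: p_M(q0 -a-> q1) = 1/3, as in equation (21).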
{
"text": "Similarly to section 5.1, we now assume we have a proper PFA M = (\u03a3, Q, q 0 , q f , T, p M ) and a CFG G = (\u03a3, N, S, R) that is unambiguous. Our goal is to find a function p G that lets proper and consistent PCFG (\u03a3, N, S, R, p G ) approximate M as well as possible. Although CFGs used for natural language processing are usually ambiguous, there may be cases in other fields in which we may assume grammars are unambiguous. Let us define 1 as the function that maps each rule from R to one. Of the set of strings recognized by M, a subset can be derived in G. The expected frequency of a rule \u03c1 in those derivations is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(\u03c1) = d,d ,w p M (w) \u2022 1(S d\u03c1d \u21d2 w)",
"eq_num": "(22)"
}
],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
{
"text": "Now we construct the PCFG G \u2229 from the PCFG G = (\u03a3, N, S, R, 1) and the PFA M as explained in section 4. Analogously to section 5.1, we obtain for each \u03c1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= (A \u2192 X 1 \u2022 \u2022 \u2022 X m ) E(\u03c1) = r 0 ,r 1 ,...,r m E((r 0 , A, r m ) \u2192 (r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m )) = r 0 ,r 1 ,...,r m outer((r 0 , A, r m )) \u2022 inner((r 0 , X 1 , r 1 ) \u2022 \u2022 \u2022 (r m\u22121 , X m , r m ))",
"eq_num": "(23)"
}
],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
{
"text": "To obtain the required PCFG (\u03a3, N, S, R, p G ), we now define the probability function p G for each \u03c1 = (A \u2192 \u03b1) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p G (\u03c1) = E(\u03c1) \u03c1 =(A\u2192\u03b1 )\u2208R E(\u03c1 )",
"eq_num": "(24)"
}
],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
{
"text": "The proof that this relative frequency estimator p G minimizes the KL distance between p M and p G on the domain L(G) is almost identical to the proof in the appendix for a similar claim from section 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PCFG on a PFA",
"sec_num": "5.2"
},
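The estimator of equation (24) mirrors the one in equation (20), except that expected rule frequencies are normalized per left-hand-side nonterminal rather than per automaton state. A short sketch; the expected frequencies below are hypothetical numbers, not values computed from any particular PFA:

```python
def train_pcfg(expected):
    """Equation (24): p_G(A -> alpha) = E(A -> alpha) divided by the total
    expected frequency of all rules with the same left-hand side A.
    expected: dict mapping (lhs, rhs_tuple) -> E(rho)."""
    totals = {}
    for (lhs, rhs), v in expected.items():
        totals[lhs] = totals.get(lhs, 0.0) + v
    return {rule: v / totals[rule[0]] for rule, v in expected.items()}

# Hypothetical expectations: rule S -> a S seen 3 times as often as S -> b.
p_g = train_pcfg({('S', ('a', 'S')): 3.0, ('S', ('b',)): 1.0})
# p_g[('S', ('a', 'S'))] == 0.75 and p_g[('S', ('b',))] == 0.25.
```

The resulting grammar is proper by construction: the probabilities of rules sharing a left-hand side sum to one.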
{
"text": "We now assume we have a proper PFA M 1 = (\u03a3, Q 1 , q 0,1 , q f,1 , T 1 , p 1 ) and an FA M 2 = (\u03a3, Q 2 , q 0,2 , q f,2 , T 2 ) that is unambiguous. Our goal is to find a function p 2 so that proper PFA (\u03a3, Q 2 , q 0,2 , q f,2 , T 2 , p 2 ) approximates M 1 as well as possible, minimizing the KL distance between p 1 and p 2 on the domain L(M 2 ). One way to solve this problem is to map M 2 to an equivalent right-linear CFG G and then to apply the algorithm from section 5.2. The obtained probability function p G can be translated back to an appropriate function p 2 . For this special case, the construction from section 4 can be simplified to the \"cross-product\" construction of finite automata (see, e.g., Aho and Ullman 1972) . The simplified forms of the functions inner and outer from section 3 are commonly called forward and backward, respectively, and they are defined by systems of linear equations. As a result, we can compute exact solutions, as opposed to approximate solutions by iteration.",
"cite_spans": [
{
"start": 712,
"end": 732,
"text": "Aho and Ullman 1972)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training a PFA on a PFA",
"sec_num": "5.3"
},
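The remark that forward and backward values are defined by systems of linear equations, and can therefore be solved exactly rather than approximated by iteration, can be sketched as follows. The sketch computes forward values as the solution of f(s) = [s = q0] + sum over transitions (r, a, s) of p(r -a-> s) * f(r), using plain Gaussian elimination; the encoding of `trans` is an assumption, and the system is assumed to have a unique solution (transition weights substochastic enough that path mass is finite).

```python
def forward_values(states, q0, trans):
    """Exact forward values of a weighted FA: f(q) is the total weight of all
    paths from q0 to q, obtained by solving (I - P^T) f = e_{q0} directly,
    with no iterative approximation.  trans: list of (r, a, s, prob)."""
    n = len(states)
    idx = {q: i for i, q in enumerate(states)}
    # Build the system (I - P^T) f = b, where b is the indicator of q0.
    a = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    b = [0.0] * n
    b[idx[q0]] = 1.0
    for r, _, s, p in trans:
        a[idx[s]][idx[r]] -= p
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(a[i][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for i in range(col + 1, n):
            f = a[i][col] / a[col][col]
            for j in range(col, n):
                a[i][j] -= f * a[col][j]
            b[i] -= f * b[col]
    f = [0.0] * n
    for i in reversed(range(n)):
        f[i] = (b[i] - sum(a[i][j] * f[j] for j in range(i + 1, n))) / a[i][i]
    return {q: f[idx[q]] for q in states}
```

For the two-state cycle q0 -a-> q1 (weight 0.5) and q1 -b-> q0 (weight 0.5), the system f(q0) = 1 + 0.5 f(q1), f(q1) = 0.5 f(q0) has the exact solution f(q0) = 4/3, f(q1) = 2/3, which iteration would only approach in the limit.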
{
"text": "InNederhof (2000), several methods of approximation were discussed that lead to determinized approximating FAs that can be much larger than the input CFGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "That we only allow one final state is not a serious restriction with regard to the set of strings we can process; only when the empty string is to be recognized could this lead to difficulties. Lifting the restriction would encumber the presentation with treatment of additional cases without affecting, however, the validity of the main results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Comments by Khalil Sima'an, Giorgio Satta, Yuval Krymolowski, and anonymous reviewers are gratefully acknowledged. The author is supported by the PIONIER Project Algorithms for Linguistic Processing, funded by NWO (Dutch Organization for Scientific Research).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We now prove that the choice of p M in section 5.1 is such that it minimizes the Kullback-Leibler distance between p G and p M , restricted to the domain L(M). Without this restriction, the KL distance is given byThis can be used for many applications mentioned in section 1. For example, an FA M approximating a CFG G is guaranteed to be such that L(M) \u2287 L(G) in the case of most practical approximation algorithms. However, if there are strings w such that w / \u2208 L(M) and p G (w) > 0, then (25) is infinite, regardless of the choice of p M . We therefore restrict p G to the domain L(M) and normalize it to obtainwhereOur goal is now to show that our choice of p M minimizesAs Z is independent of p M , it is sufficient to show that our choice of p M minimizesNow consider the expressionBy the usual proof technique with Lagrange multipliers, it is easy to show that our choice of p M in section 5.1, given byfor each \u03c4 = (r a \u2192 s) \u2208 T, is such that it maximizes (30), under the constraint of properness.For \u03c4 \u2208 T and w \u2208 \u03a3 * , we define # \u03c4 (w) to be zero, if w / \u2208 L(M), and otherwise to be the number of occurrences of \u03c4 in the (unique) computation that recognizes w. Formally, # \u03c4 (w) = c,c 1 ((q 0 , w) c\u03c4c (q f , )). We rewrite (30) asWe have already seen that the choice of p M that maximizes (30) is given by (31), andis determined solely by p G and by the condition that p M (w) > 0 for all w such that w \u2208 L(M) and p G (w) > 0. This implies that (30) is maximized by choosing p M such thatis maximized, or alternatively thatis minimized, under the constraint that p M (w) > 0 for all w such that w \u2208 L(M) and p G (w) > 0. For this choice of p M , (29) equals (35). Conversely, if a choice of p M minimizes (29), we may assume that p M (w) > 0 for all w such that w \u2208 L(M) and p G (w) > 0, since otherwise (29) is infinite. Again, for this choice of p M , (29) equals (35). 
It follows that the choice of p M that minimizes (29) concurs with the choice of p M that maximizes (30), which concludes our proof.",
"cite_spans": [],
"ref_spans": [
{
"start": 1199,
"end": 1209,
"text": "((q 0 , w)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing, volume 1 of The Theory of Parsing, Translation and Compiling",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jeffrey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ullman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bar-Hillel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yehoshua",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Perles",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shamir",
"suffix": ""
}
],
"year": 1964,
"venue": "Language and Information: Selected Essays on Their Theory and Application",
"volume": "",
"issue": "",
"pages": "116--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, Alfred V. and Jeffrey D. Ullman. 1972. Parsing, volume 1 of The Theory of Parsing, Translation and Compiling. Prentice Hall, Englewood Cliffs, NJ. Bar-Hillel, Yehoshua, M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. In Yehoshua Bar-Hillel, editor, Language and Information: Selected Essays on Their Theory and Application. Addison-Wesley, Reading, MA, pages 116-150.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the complexity of some extensions of RCG parsing",
"authors": [
{
"first": "Eberhard",
"middle": [],
"last": "Bertsch",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"-"
],
"last": "",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventh International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "66--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertsch, Eberhard and Mark-Jan Nederhof. 2001. On the complexity of some extensions of RCG parsing. In Proceedings of the Seventh International Workshop on Parsing Technologies, pages 66-77, Beijing, October.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Applying probabilistic measures to abstract languages",
"authors": [
{
"first": "Taylor",
"middle": [
"L"
],
"last": "Booth",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1973,
"venue": "IEEE Transactions on Computers, C",
"volume": "22",
"issue": "5",
"pages": "442--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Booth, Taylor L. and Richard A. Thompson. 1973. Applying probabilistic measures to abstract languages. IEEE Transactions on Computers, C-22(5):442-450.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Range concatenation grammars",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Boullier",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Sixth International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "53--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boullier, Pierre. 2000. Range concatenation grammars. In Proceedings of the Sixth International Workshop on Parsing Technologies, pages 53-64, Trento, Italy, February.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Berkeley Restaurant Project",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Chuck",
"middle": [],
"last": "Wooters",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Tajchman",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Segal",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the International Conference on Spoken Language Processing (ICSLP-94)",
"volume": "",
"issue": "",
"pages": "2139--2142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jurafsky, Daniel, Chuck Wooters, Gary Tajchman, Jonathan Segal, Andreas Stolcke, Eric Fosler, and Nelson Morgan. 1994. The Berkeley Restaurant Project. In Proceedings of the International Conference on Spoken Language Processing (ICSLP-94), pages 2139-2142, Yokohama, Japan.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recognition can be harder than parsing",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Intelligence",
"volume": "10",
"issue": "4",
"pages": "486--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lang, Bernard. 1994. Recognition can be harder than parsing. Computational Intelligence, 10(4):486-494.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, Christopher D. and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Finite-state transducers in language and speech processing",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "2",
"pages": "269--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohri, Mehryar. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269-311.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Regular approximation of context-free grammars through transformation",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"-"
],
"last": "",
"suffix": ""
}
],
"year": 2001,
"venue": "Language and Speech Technology. Kluwer Academic",
"volume": "",
"issue": "",
"pages": "153--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohri, Mehryar and Mark-Jan Nederhof. 2001. Regular approximation of context-free grammars through transformation. In J.-C. Junqua and G. van Noord, editors, Robustness in Language and Speech Technology. Kluwer Academic, pages 153-163.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Practical experiments with regular approximation of context-free languages",
"authors": [
{
"first": "Mark",
"middle": [
"-"
],
"last": "Nederhof",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "1",
"pages": "17--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nederhof, Mark-Jan. 2000. Practical experiments with regular approximation of context-free languages. Computational Linguistics, 26(1):17-44.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Probabilistic parsing as intersection",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2003,
"venue": "Laboratoire Lorrain de recherche en informatique et ses applications (LORIA)",
"volume": "",
"issue": "",
"pages": "137--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nederhof, Mark-Jan and Giorgio Satta. 2003. Probabilistic parsing as intersection. In Proceedings of the Eighth International Workshop on Parsing Technologies, pages 137-148, Laboratoire Lorrain de recherche en informatique et ses applications (LORIA), Nancy, France, April.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Introduction to Probabilistic Automata",
"authors": [
{
"first": "Azaria",
"middle": [],
"last": "Paz",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paz, Azaria. 1971. Introduction to Probabilistic Automata. Academic Press, New York.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The recognition capacity of local syntactic constraints",
"authors": [
{
"first": "Mori",
"middle": [],
"last": "Rimon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Herz",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Fifth Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "155--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rimon, Mori and J. Herz. 1991. The recognition capacity of local syntactic constraints. In Proceedings of the Fifth Conference of the European Chapter of the ACL, pages 155-160, Berlin, April.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Probabilistic grammars and automata",
"authors": [
{
"first": "Eugene",
"middle": [
"S"
],
"last": "Santos",
"suffix": ""
}
],
"year": 1972,
"venue": "Information and Control",
"volume": "21",
"issue": "",
"pages": "27--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santos, Eugene S. 1972. Probabilistic grammars and automata. Information and Control, 21:27-47.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On multiple context-free grammars",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Seki",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "Tadao",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1991,
"venue": "Theoretical Computer Science",
"volume": "88",
"issue": "",
"pages": "191--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seki, Hiroyuki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88:191-229.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Abstract Automata",
"authors": [
{
"first": "Peter",
"middle": [
"H"
],
"last": "Starke",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starke, Peter H. 1972. Abstract Automata. North-Holland, Amsterdam.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Precise N-gram probabilities from stochastic context-free grammars",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Segal",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "74--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas and Jonathan Segal. 1994. Precise N-gram probabilities from stochastic context-free grammars. In Proceedings of the 32nd Annual Meeting of the ACL, pages 74-79, Las Cruces, NM, June.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The use of shared forests in tree adjoining grammar parsing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Vijay-Shanker",
"suffix": ""
},
{
"first": "David",
"middle": [
"J."
],
"last": "Weir",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Sixth Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "384--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay-Shanker, K. and David J. Weir. 1993. The use of shared forests in tree adjoining grammar parsing. In Proceedings of the Sixth Conference of the European Chapter of the ACL, pages 384-393, Utrecht, The Netherlands, April.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Integration of speech recognition and natural language processing in the MIT Voyager system",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zue",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Goodine",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Polifroni",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the ICASSP-91",
"volume": "1",
"issue": "",
"pages": "713--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zue, Victor, James Glass, David Goodine, Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff. 1991. Integration of speech recognition and natural language processing in the MIT Voyager system. In Proceedings of the ICASSP-91, Toronto, volume 1, pages 713-716.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "m \u2265 0, and each sequence of states r 0 , . . . , r m \u2208 Q, let the rule \u03c1 \u2229"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Example of input PCFG G, with rule probabilities between square brackets, input FA M, the reduced PCFG G \u2229 , and the resulting trained PFA."
},
"TABREF0": {
"html": null,
"num": null,
"text": "It can be easily shown that if h(d \u2229 ) = (d, c) and S \u2229 d \u2229 \u21d2 w, then for the same w, we have S d \u21d2 w and (q 0 , w) c (q f , ). Conversely, if for some w, d, and c we have S d \u21d2 w and (q 0 , w) c (q f , ), then there is precisely one derivation d \u2229 such that h(d \u2229 ) = (d, c) and S \u2229",
"content": "<table><tr><td>d \u2229 \u21d2 w.</td></tr><tr><td>It was observed by</td></tr></table>",
"type_str": "table"
}
}
}
}