{
"paper_id": "J14-3008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:41:52.385801Z"
},
"title": "Pushdown Automata in Statistical Machine Translation",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": "",
"affiliation": {},
"email": "allauzen@google.com"
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Gonzalo",
"middle": [],
"last": "Iglesias",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": "",
"affiliation": {},
"email": "riley@google.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article describes the use of pushdown automata (PDA) in the context of statistical machine translation and alignment under a synchronous context-free grammar. We use PDAs to compactly represent the space of candidate translations generated by the grammar when applied to an input sentence. General-purpose PDA algorithms for replacement, composition, shortest path, and expansion are presented. We describe HiPDT, a hierarchical phrase-based decoder using the PDA representation and these algorithms. We contrast the complexity of this decoder with a decoder based on a finite state automata representation, showing that PDAs provide a more suitable framework to achieve exact decoding for larger synchronous context-free grammars and smaller language models. We assess this experimentally on a large-scale Chinese-to-English alignment and translation task. In translation, we propose a two-pass decoding strategy involving a weaker language model in the first-pass to address the results of PDA complexity analysis. We study in depth the experimental conditions and tradeoffs in which HiPDT can achieve state-of-the-art performance for large-scale SMT.",
"pdf_parse": {
"paper_id": "J14-3008",
"_pdf_hash": "",
"abstract": [
{
"text": "This article describes the use of pushdown automata (PDA) in the context of statistical machine translation and alignment under a synchronous context-free grammar. We use PDAs to compactly represent the space of candidate translations generated by the grammar when applied to an input sentence. General-purpose PDA algorithms for replacement, composition, shortest path, and expansion are presented. We describe HiPDT, a hierarchical phrase-based decoder using the PDA representation and these algorithms. We contrast the complexity of this decoder with a decoder based on a finite state automata representation, showing that PDAs provide a more suitable framework to achieve exact decoding for larger synchronous context-free grammars and smaller language models. We assess this experimentally on a large-scale Chinese-to-English alignment and translation task. In translation, we propose a two-pass decoding strategy involving a weaker language model in the first-pass to address the results of PDA complexity analysis. We study in depth the experimental conditions and tradeoffs in which HiPDT can achieve state-of-the-art performance for large-scale SMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Synchronous context-free grammars (SCFGs) are now widely used in statistical machine translation, with Hiero as the preeminent example (Chiang 2007) . Given an SCFG and an n-gram language model, the challenge is to decode with them, that is, to apply them to source text to generate a target translation.",
"cite_spans": [
{
"start": 135,
"end": 148,
"text": "(Chiang 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Decoding is complex in practice, but it can be described simply and exactly in terms of the formal languages and relations involved. We will use this description to introduce and analyze pushdown automata (PDAs) for machine translation. This formal description will allow close comparison of PDAs to existing decoders which are based on other forms of automata. Decoding can be described in terms of the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "1. Translation: T = \u03a0 2 ({s}\u2022G) The first step is to compose the finite language {s}, which represents the source sentence to be translated, with the algebraic relation G for the translation grammar G. The result of this composition projected on the output side is T , a weighted context-free grammar that contains all possible translations of s under G. Following the usual definition of Hiero grammars, we assume that G does not allow unbounded insertions so that T is a regular language.",
"cite_spans": [
{
"start": 24,
"end": 31,
"text": "({s}\u2022G)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The next step is to compose the result of Step 1 with the weighted regular grammar M defined by the n-gram language model, M. The result of this composition is L, whose paths are weighted by the combined language model and translation scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Application: L = T \u2229 M",
"sec_num": "2."
},
{
"text": "3. Search:l = argmax l\u2208L L The final step is to find the path through L that has the best combined translation and language model score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Application: L = T \u2229 M",
"sec_num": "2."
},
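The three decoding steps can be illustrated with a toy sketch (a hypothetical miniature, not the actual decoder): T is reduced to an explicit set of candidate strings carrying translation costs, the language model is a stand-in cost function, and search is an argmin over combined costs, as in the tropical semiring. All names and numbers are illustrative, not from the article.

```python
# Step 1 (Translation): candidate target strings with translation costs
# (negative log probabilities, i.e. weights in the tropical semiring).
T = {
    "t1 t2 t2 t3 t4 t7": 1.5,
    "t1 t3 t2 t3 t6 t7": 2.0,
}

# Step 2 (Language model application): a toy stand-in for an n-gram LM cost,
# here a length penalty plus a penalty for repeated tokens.
def lm_cost(sentence):
    tokens = sentence.split()
    return 0.1 * len(tokens) + 0.5 * (len(tokens) - len(set(tokens)))

L = {s: g_cost + lm_cost(s) for s, g_cost in T.items()}

# Step 3 (Search): the best translation is the minimum-cost candidate.
best = min(L, key=L.get)
print(best)
```

On this toy instance both candidates receive the same LM cost, so the translation score decides the argmin.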
{
"text": "Step 1 that generates T can be performed by a modified CYK algorithm (Chiang 2007) . Our interest is in the different types of automata that can be used to represent T as it is produced by this composition. We focus on three types of representations: hypergraphs (Chiang 2007) , weighted finite state automata (Iglesias et al. 2009a; de Gispert et al. 2010) , and PDAs. We will give a formal definition of PDAs in Section 2, but we will first illustrate and compare these different representations by a simple example. Consider translating a source sentence 's 1 s 2 s 3 ' with a simple Hiero grammar G :",
"cite_spans": [
{
"start": 69,
"end": 82,
"text": "(Chiang 2007)",
"ref_id": "BIBREF7"
},
{
"start": 263,
"end": 276,
"text": "(Chiang 2007)",
"ref_id": "BIBREF7"
},
{
"start": 310,
"end": 333,
"text": "(Iglesias et al. 2009a;",
"ref_id": "BIBREF17"
},
{
"start": 334,
"end": 357,
"text": "de Gispert et al. 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "X\u2192 s 1 , t 2 t 3 S\u2192 X s 2 s 3 , t 1 t 2 X t 4 t 7 S\u2192 X s 2 s 3 , t 1 t 3 X t 6 t 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "Step 1 yields the translations T = {'t 1 t 2 t 2 t 3 t 4 t 7 ' , 't 1 t 3 t 2 t 3 t 6 t 7 '}, and Figure 1 gives examples of the different representations of these translations. We summarize the salient features of these representations as they are used in decoding.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
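For this toy grammar, the set T can be reproduced by brute-force rule substitution. The sketch below (with illustrative variable names, and plain token strings standing in for automata) simply rewrites the nonterminal X in the target side of each S rule; a real decoder would parse with CYK instead.

```python
# Target sides of the example grammar's rules; 'X' marks the nonterminal slot.
# X -> <s1, t2 t3>
# S -> <X s2 s3, t1 t2 X t4 t7>
# S -> <X s2 s3, t1 t3 X t6 t7>
x_targets = ["t2 t3"]
s_targets = ["t1 t2 X t4 t7", "t1 t3 X t6 t7"]

# Substitute every X expansion into every S rule to enumerate T.
T = sorted(s.replace("X", x) for s in s_targets for x in x_targets)
print(T)
```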
{
"text": "Hypergraphs. As described by Chiang (2007) , a Hiero decoder can generate translations in the form of a hypergraph, as in Figure 1a . As the figure shows, there is a 1:1 correspondence between each production in the CFG and each hyperedge in the hypergraph. Decoding proceeds by intersecting the translation hypergraph with a language model, represented as a finite automaton, yielding L as a hypergraph.",
"cite_spans": [
{
"start": 29,
"end": 42,
"text": "Chiang (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 122,
"end": 131,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "Step 3 yields a translation by finding the shortest path through the hypergraph L (Huang 2008) .",
"cite_spans": [
{
"start": 82,
"end": 94,
"text": "(Huang 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "Weighted Finite State Automata (WFSAs) . Because T is a regular language and M is represented by a finite automaton, it follows that T and L can themselves be represented as finite automata. Consequently, Steps 2 and 3 can be solved using weighted finite-state intersection and single-source shortest path algorithms, respectively (Mohri 2009) . This is the general approach adopted in the HiFST decoder (Iglesias et al. 2009a; de Gispert et al. 2010) , which first represents T as a Recursive Transition Network (RTN) and then performs expansion to generate a WFSA. Figure 1b shows the space of translations for this example represented as an RTN. Like the hypergraph, it also has a 1:1 correspondence between each production in the CFG and paths in the RTN components. The recursive RTN itself can be expanded into a single WFSA, as shown in Figure 1c . Intersection and shortest path algorithms are available for both of these WFSAs.",
"cite_spans": [
{
"start": 31,
"end": 38,
"text": "(WFSAs)",
"ref_id": null
},
{
"start": 331,
"end": 343,
"text": "(Mohri 2009)",
"ref_id": "BIBREF27"
},
{
"start": 404,
"end": 427,
"text": "(Iglesias et al. 2009a;",
"ref_id": "BIBREF17"
},
{
"start": 428,
"end": 451,
"text": "de Gispert et al. 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 567,
"end": 576,
"text": "Figure 1b",
"ref_id": "FIGREF0"
},
{
"start": 844,
"end": 853,
"text": "Figure 1c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "Pushdown Automata. Like WFSAs, PDAs are easily generated from RTNs, as will be described later, and Figure 1d gives the PDA representation for this example. The PDA represents the same language as the FSA, but with fewer states. Procedures to carry out Steps 2 and 3 in decoding will be described in subsequent sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 109,
"text": "Figure 1d",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "We will show that PDAs provide a general framework to describe key aspects of several existing and novel translation algorithms. We note that PDAs have long been used to describe parsing algorithms (Aho and Ullman 1972; Lang 1974) , and it is well known that pushdown transducers, the extended version of PDA with input and output labels in each transition, do not have the expressive power needed to generate synchronous context-free languages. For this reason, we do not use PDAs to implement",
"cite_spans": [
{
"start": 198,
"end": 219,
"text": "(Aho and Ullman 1972;",
"ref_id": "BIBREF0"
},
{
"start": 220,
"end": 230,
"text": "Lang 1974)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "Step 1 in decoding: throughout this article a CYK-like parsing algorithm is always used for Step 1. However, we do use PDAs to represent the regular languages produced in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "Step 1 and in the intersection and shortest distance operations needed for Steps 2 and 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The composition {s} \u2022 G in",
"sec_num": null
},
{
"text": "We introduce HiPDT, a hierarchical phrase-based decoder that uses a PDA representation for the target language. The architecture of the system is shown in Figure 2 , where ",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "HiPDT: Hierarchical Phrase-Based Translation with PDAs",
"sec_num": "1.1"
},
{
"text": "HiPDT versus HiFST: General flow and high-level operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "we contrast it with HiFST (de Gispert et al. 2010) . Both decoders parse the sentence with a grammar G using a modified version of the CYK algorithm to generate the translation search space as an RTN. Each decoder then follows a different path: HiFST expands the RTN into an FSA, intersects it with the language model, and then prunes the result; HiPDT performs the following steps:",
"cite_spans": [
{
"start": 20,
"end": 50,
"text": "HiFST (de Gispert et al. 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "1. Convert the RTN into PDA using the replacement algorithm. The PDA representation for the example grammar in Section 1 is shown in Figure 1 . The algorithm will be described in Section 3.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "2. Apply the language model scores to the PDA by composition. This operation is described in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "3. Perform either one of the following operations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "(a) Shortest path through the PDA to get the exact best translation under the model. Shortest distance/path algorithm is described in Section 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "(b) Pruned expansion to an FSA. This expansion uses admissible pruning and outputs a lattice. We do this for posterior rescoring steps. The algorithm will be presented in detail in Sections 3.5 and 3.5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "The principal difference between the two decoders is the point at which finite-state expansion is performed. In HiFST, the RTN representation is immediately expanded to an FSA. In HiPDT, the PDA pruned expansion or shortest path computation is done after the language model is applied, so that all computation is done with respect to both the translation and language model scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "The use of RTNs as an initial translation representation is somewhat influenced by the development history of our FST and SMT systems. RTN algorithms were available in OpenFST at the time HiFST was developed. HiPDT was developed as an extension to HiFST using PDA algorithms, and these have subsequently been included in OpenFST. A possible alternative approach could be to produce a PDA directly by traversing the CYK grid. WFSAs could then be generated by PDA expansion, with a computational complexity in speed and memory usage similar to the RTN-based approach. We present RTNs as the initial translation representation because the generation of RTNs during parsing is straightforward and has been previously presented (de Gispert et al. 2010) . We note, however, that RTN composition is algorithmically more complex than PDA (and FSA) composition, so that RTNs themselves are not ideal representations of T if a language model is to be applied. Composition of PDAs with FSAs will be discussed in Section 3.3. Figure 3 continues the simple translation example from earlier, showing how HiPDT and HiFST both benefit from the compactness offered by WFSA epsilon removal, determinization, and minimization operations. When applied to PDAs, these operations treat parentheses as regular symbols. Compact representations of RTNs are shared by both approaches. Figure 4 illustrates the PDA representation of the translation space under a slightly more complex grammar that includes rules with alternative orderings of nonterminals. The rule S\u2192 X 1 s 2 X 2 , t 1 X 1 X 2 produces the sequence 't 1 t 3 t 4 t 5 t 6 ', and S\u2192 X 1 s 2 X 2 , t 2 X 2 X 1 produces 't 2 t 5 t 6 t 3 t 4 '. The PDA efficiently represents the alternative orderings of the phrases 't 3 t 4 ' and 't 5 t 6 ' allowed under this grammar.",
"cite_spans": [
{
"start": 723,
"end": 747,
"text": "(de Gispert et al. 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1014,
"end": 1022,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1359,
"end": 1367,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "In addition to translation, this architecture can also be used directly to carry out source-to-target alignment, or synchronous parsing, under the SCFG in a two-step composition rather than one synchronous parsing stage. For example, by using M as the automata that accepts 't 1 t 2 t 3 t 6 t 7 ', Step 2 will yield all derivations that yield this string as a translation of the source string. This is the approach taken in Iglesias et al. (2009a) and de Gispert et al. (2010) for the RTN/FSA and in Dyer (2010b) for hypergraphs. In Section 4 we analyze how PDAs can be used for alignment.",
"cite_spans": [
{
"start": 424,
"end": 447,
"text": "Iglesias et al. (2009a)",
"ref_id": "BIBREF17"
},
{
"start": 452,
"end": 476,
"text": "de Gispert et al. (2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "We summarize here the aims of this article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals",
"sec_num": "1.2"
},
{
"text": "We will show how PDAs can be used as compact representations of the space T of candidate translations generated by a hierarchical phrase-based SCFG when applied to an input sentence s and intersected with a language model M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals",
"sec_num": "1.2"
},
{
"text": "We have described the architecture of HiPDT, a hierarchical phrase-based decoder based on PDAs, and have identified the general-purpose algorithms needed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals",
"sec_num": "1.2"
},
{
"text": "0 1 t1 2 t2 3 ( 4 [ 5 t3 6 t5 7 t4 8 t6 9 ) 10 ] ) 11 ] ( [ X\u2192 s 1 , t 3 t 4 X\u2192 s 3 , t 5 t 6 S\u2192 X 1 s 2 X 2 , t 1 X 1 X 2 S\u2192 X 1 s 2 X 2 , t 2 X 2 X 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goals",
"sec_num": "1.2"
},
{
"text": "Example of translation grammar with reordered nonterminals and the PDA representing the result of applying the grammar to input sentence s 1 s 2 s 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "to perform translation and alignment; in doing so we have highlighted the similarities and differences relative to translation with FSAs (Section 1.1). We will provide a formal description of PDAs (Section 2) and present in detail the associated PDA algorithms required to carry out Steps 2 and 3, including RTN replacement, composition, shortest path, expansion, and pruned expansion (Section 3). We will show both theoretically and experimentally that the PDA representation is well suited for exact decoding under a large SCFG and a small language model. An analysis of decoder complexity in terms of the automata used in the representation is presented (Section 3). One important aspect of the translation task is whether the search for the best translation is admissible (or exact) under the translation and language models. Stated differently, we wish to know whether a decoder produces the actual shortest path found or whether some form of pruning might have introduced search errors. In our formulation, we can exclude inadmissible pruning from the shortest-path algorithms, and doing so makes it straightforward to compare the computational complexity of a full translation pipeline using different representations of T (Section 4). We empirically demonstrate that a PDA representation is superior to an FSA representation in the ability to perform exact decoding both in an inversion transduction grammar-style word alignment task and in a translation task with a small language model (Section 4). In these experiments we take HiFST as a contrastive system for HiPDT, but we do not present experimental results with hypergraph representations. Hypergraphs are widely used by the SMT community, and discussions and contrastive experiments between HiFST and cube pruning decoders are available in the literature (Iglesias et al. 2009a; de Gispert et al. 2010 ). We will propose a two-pass translation decoding strategy for HiPDT based on entropy-pruned first-pass language models. 
Our complexity analysis prompts us to investigate decoding strategies based on large translation grammars and small language models. We describe, implement, and evaluate a two-pass decoding strategy for a large-scale translation task using HiPDT (Section 5). We show that entropy-pruned language models can be used in first-pass translation, followed by admissible beam pruning of the output lattice and subsequent rescoring with a full language model. We analyze the search errors that might be introduced by a two-pass translation approach and show that these can be negligible if pruning thresholds are set appropriately (Section 5.2). Finally, we detail the experimental conditions and speed/performance tradeoffs that allow HiPDT to achieve state-of-the-art performance for largescale SMT under a large grammar (Section 5.3), including lattice rescoring steps under a vast 5-gram language model and lattice minimum Bayes risk decoding (Section 5.4). With this translation strategy HiPDT can yield very good translation performance. For comparison, the performance of this Chinese-to-English SMT described in Section 5.4 is equivalent to that of the University of Cambridge submission to the NIST OpenMT 2012 Evaluation. 1",
"cite_spans": [
{
"start": 1821,
"end": 1844,
"text": "(Iglesias et al. 2009a;",
"ref_id": "BIBREF17"
},
{
"start": 1845,
"end": 1867,
"text": "de Gispert et al. 2010",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Informally, pushdown transducers are finite-state transducers that have been augmented with a stack. Typically this is done by adding a stack alphabet and labeling each transition with a stack operation (a stack symbol to be pushed onto, popped, or read from the stack) in addition to the usual input and output labels (Aho and Ullman 1972; Berstel 1979 ) and weight (Kuich and Salomaa 1986; Petre and Salomaa 2009) . Our equivalent representation allows a transition to be labeled by a stack operation or a regular input/output symbol, but not both. Stack operations are represented by pairs of open and close parentheses (pushing a symbol on and popping it from the stack). The advantage of this representation is that it is identical to the finite automaton representation except that certain symbols (the parentheses) have special semantics. As such, several finite-state algorithms either immediately generalize to this PDA representation or do so with minimal changes. In this section we formally define pushdown automata and transducers.",
"cite_spans": [
{
"start": 319,
"end": 340,
"text": "(Aho and Ullman 1972;",
"ref_id": "BIBREF0"
},
{
"start": 341,
"end": 353,
"text": "Berstel 1979",
"ref_id": "BIBREF3"
},
{
"start": 367,
"end": 391,
"text": "(Kuich and Salomaa 1986;",
"ref_id": "BIBREF20"
},
{
"start": 392,
"end": 415,
"text": "Petre and Salomaa 2009)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pushdown Automata",
"sec_num": "2."
},
{
"text": "A (restricted) Dyck language consist of \"well-formed\" or \"balanced\" strings over a finite number of pairs of parentheses. Thus the string ( [ ( ) ( ) ] { } [ ] ) ( ) is in the Dyck language over three pairs of parentheses (see Berstel 1979 for a more detailed presentation). More formally, let A and A be two finite alphabets such that there exists a bijection f from A to A. Intuitively, f maps an open parenthesis to its corresponding close parenthesis. Let\u0101 denote f (a) if a \u2208 A and f \u22121 (a) if a \u2208 A. The Dyck language D A over the alphabet A = A \u222a A is then the language defined by the following context-free grammar: S \u2192 \u01eb, S \u2192 SS and S \u2192 aS\u0101 for all a \u2208 A. We define the mapping c A : A * \u2192 A * as follows. c A (x) is the string obtained by iteratively deleting from x all factors of the form a\u0101 with a \u2208 A. Observe that D A = c \u22121 A (\u01eb). Finally, for a subset B \u2286 A, we define the mapping r B :",
"cite_spans": [
{
"start": 227,
"end": 239,
"text": "Berstel 1979",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
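The mapping c and the Dyck membership test can be sketched in a few lines (a hypothetical illustration; the alphabet and function names are not from the article): iteratively deleting factors of the form a·ā is exactly what a left-to-right scan with a stack computes, and a string is in the Dyck language exactly when it reduces to the empty string.

```python
# Open parentheses mapped to their closes: the bijection f of the definition.
PAIRS = {"(": ")", "[": "]", "{": "}"}

def c(x):
    # Iteratively delete factors a·ā (an open parenthesis immediately
    # followed by its matching close); a single stack pass computes the
    # same normal form.
    out = []
    for sym in x:
        if out and PAIRS.get(out[-1]) == sym:
            out.pop()
        else:
            out.append(sym)
    return "".join(out)

def in_dyck(x):
    # Membership in the Dyck language: c reduces x to the empty string.
    return c(x) == ""

def r(x, B):
    # The mapping r_B erases every symbol not in B.
    return "".join(sym for sym in x if sym in B)
```

For the example string in the text, `in_dyck("([()()]{}[])()")` holds, while an unmatched prefix such as `"(()"` reduces to `"("` rather than the empty string.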
{
"text": "A * \u2192 B * by r B (x 1 . . . x n ) = y 1 . . . y n with y i = x i if x i \u2208 B and y i = \u01eb otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "A semiring (K, \u2295, \u2297, 0, 1) is a ring that may lack negation. It is specified by a set of values K, two binary operations \u2295 and \u2297, and two designated values 0 and 1. The operation \u2295 is associative, commutative, and has 0 as identity. The operation \u2297 is associative, has identity 1, distributes with respect to \u2295, and has 0 as annihilator: for all a \u2208 K, a \u2297 0 = 0 \u2297 a = 0. If \u2297 is also commutative, we say that the semiring is commutative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "The probability semiring (R + , +, \u00d7, 0, 1) is used when the weights represent probabilities. The log semiring (R \u222a {\u221e}, \u2295 log , +, \u221e, 0), isomorphic to the probability semiring via the negative-log mapping, is often used in practice for numerical stability. The tropical semiring (R \u222a {\u221e}, min, +, \u221e, 0) is derived from the log semiring using the Viterbi approximation. These three semirings are commutative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
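As a small sketch of these definitions (illustrative code, not part of the article), the tropical semiring's operations, identities, annihilator, and distributivity can be spot-checked directly:

```python
# The tropical semiring (R ∪ {∞}, min, +, ∞, 0):
# ⊕ is min with identity ∞, and ⊗ is + with identity 0 and annihilator ∞.
INF = float("inf")

def oplus(a, b):   # ⊕: combines alternative paths (Viterbi approximation)
    return min(a, b)

def otimes(a, b):  # ⊗: accumulates weights along a path
    return a + b

# Spot-check the semiring axioms on sample values.
a, b, c = 1.5, 0.25, 3.0
assert oplus(a, INF) == a                 # ∞ is the ⊕ identity
assert otimes(a, 0.0) == a                # 0 is the ⊗ identity
assert otimes(a, INF) == INF              # ∞ annihilates under ⊗
assert otimes(a, oplus(b, c)) == oplus(otimes(a, b), otimes(a, c))  # distributivity
```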
{
"text": "A weighted pushdown automaton (PDA) T over a semiring (K, \u2295, \u2297, 0, 1) is an 8-tuple (\u03a3, \u03a0, \u03a0, Q, E, I, F, \u03c1) where \u03a3 is the finite input alphabet, \u03a0 and \u03a0 are the finite open and close parenthesis alphabets, Q is a finite set of states, I \u2208 Q the initial state, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "F \u2286 Q the set of final states, E \u2286 Q \u00d7 (\u03a3 \u222a \u03a0 \u222a {\u01eb}) \u00d7 K \u00d7 Q a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "\u2208 F. A path \u03c0 is balanced if r \u03a0 (i[\u03c0]) \u2208 D \u03a0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "A balanced path \u03c0 accepts the string x \u2208 \u03a3 * if it is a balanced accepting path such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "r \u03a3 (i[\u03c0]) = x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "The weight associated by T to a string x \u2208 \u03a3 * is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "T(x) = \u03c0\u2208P(x) w[\u03c0]\u2297\u03c1(n[\u03c0])",
"eq_num": "(1)"
}
],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "where P(x) denotes the set of balanced paths accepting x. A weighted language is recognizable by a weighted pushdown automaton iff it is context-free. We define the size of T as |T| = |Q|+|E|. A PDA T has a bounded stack if there exists K \u2208 N such that for any path",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0 from I such that c \u03a0 (r \u03a0 (i[\u03c0])) \u2208 \u03a0 * : |c \u03a0 (r \u03a0 (i[\u03c0]))| \u2264 K",
"eq_num": "(2)"
}
],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "In other words, the number of open parentheses that are not closed along \u03c0 is bounded. If T has a bounded stack, then it represents a regular language. Figure 5 shows nonregular, regular, and bounded-stack PDAs. A weighted finite automaton (FSA) can be viewed as a PDA where the open and close parentheses alphabets are empty (see Mohri 2009 for a stand-alone definition). Finally, a weighted pushdown transducer (PDT) T over a semiring (K, \u2295, \u2297, 0, 1) is a 9-tuple (\u03a3, \u2206, \u03a0, \u03a0, Q, E, I, F, \u03c1) where \u03a3 is the finite input alphabet, \u2206 is the finite output alphabet, \u03a0 and \u03a0 are the finite open and close parenthesis alphabets, Q is a finite set of states, I \u2208 Q the initial state, F \u2286 Q the set of final states, for all its transitions. For simplicity, our following presentation focuses on acceptors, rather than the more general case of transducers. This is adequate for the translation applications we describe, with the exception of the treatment of alignment in Section 4.3, for which the intersection algorithm for PDTs and FSTs is given in Appendix A.",
"cite_spans": [
{
"start": 331,
"end": 341,
"text": "Mohri 2009",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Definitions",
"sec_num": "2.1"
},
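The balanced-path condition above can be sketched as a stack simulation over a path's transition labels (a hypothetical illustration with made-up alphabets): parentheses drive pushes and pops, regular symbols accumulate into r_Σ(i[π]), and the path is balanced only if every close matches the open on top of the stack and the stack empties at the end.

```python
# Transition labels are either regular symbols or open/close parentheses.
OPEN_TO_CLOSE = {"(": ")", "[": "]"}
CLOSES = set(OPEN_TO_CLOSE.values())

def accepted_string(labels):
    """Return r_Σ of the path's labels if the path is balanced, else None."""
    stack, output = [], []
    for lab in labels:
        if lab in OPEN_TO_CLOSE:          # open parenthesis: push
            stack.append(lab)
        elif lab in CLOSES:               # close parenthesis: must match top
            if not stack or OPEN_TO_CLOSE[stack[-1]] != lab:
                return None
            stack.pop()
        else:                             # regular symbol: part of r_Σ
            output.append(lab)
    # Balanced only if no open parenthesis is left unmatched.
    return " ".join(output) if not stack else None
```

For instance, the label sequence t1 ( t3 ) is balanced and accepts 't1 t3', while ( ] and a lone ( are rejected.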
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E \u2286 Q \u00d7 (\u03a3 \u222a \u03a0 \u222a 0 1 a 2 \u03b5 ( 3 ) b 0 1 a 2 \u03b5 ( \u03b5 3 ) \u03b5 b 0 1 ( 3 \u03b5 2 a 4 ( ) 5 b ) (a) (b) (c) 0,\u03b5 1,( \u03b5 3,\u03b5 \u03b5 2,( a 4,( \u03b5 \u03b5 5,( b \u03b5 0 1 a:c/1 2 \u03b5:\u03b5 (:(/1 3 ):) b:c/1 2 0 \u03b5:\u03b5 1 a:c/1 3 S: /1 \u03b5 b:c/1 T S (d) (e)",
"eq_num": "(f)"
}
],
"section": "Definitions",
"sec_num": "2.1"
},
{
"text": "In this section we describe in detail the following PDA algorithms: Replacement, Composition, Shortest Path, and (Pruned) Expansion. Although these are needed to implement HiPDT, these are general purpose algorithms, and suitable for many other applications outside the focus of this article. The algorithms described in this section have been implemented in the PDT extension (Allauzen and Riley 2011) of the OpenFst library (Allauzen et al. 2007) . In this section, in order to simplify the presentation we will only consider machines over the tropical semiring (R + \u222a {\u221e}, min, +, \u221e, 0). However, for each operation, we will specify in which semirings it can be applied.",
"cite_spans": [
{
"start": 426,
"end": 448,
"text": "(Allauzen et al. 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PDT Operations",
"sec_num": "3."
},
{
"text": "We briefly give formal definitions for RTNs that will be needed to present the RTN expansion operation. Examples are shown earlier in Figures 1(b) and 3(a). Informally, an RTN is an automaton where some labels, nonterminals, are recursively replaced by other automata. We give the formal definition for acceptors; the extension to RTN transducers is straightforward. An RTN R over the tropical semiring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "(R + \u222a {\u221e}, min, +, \u221e, 0) is a 4-tuple (N, \u03a3, (T \u03bd ) \u03bd\u2208N , S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "where N is the alphabet of nonterminals, \u03a3 is the input alphabet, (T \u03bd ) \u03bd\u2208N is a family of FSTs with input alphabet \u03a3 \u222a N, and S \u2208 N is the root nonterminal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "A sequence x \u2208 \u03a3 * is accepted by (R, \u03bd) if there exists an accepting path \u03c0 in T \u03bd such that \u03c0 = \u03c0 1 e 1 . . . \u03c0 n e n \u03c0 n+1 with i[\u03c0 k ] \u2208 \u03a3 * , i[e k ] \u2208 N and such that there exists sequences x k such that x k is accepted by (R, i[e k ]) and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "x = i[\u03c0 1 ]x 1 . . . i[\u03c0 n ]x n i[\u03c0 n+1 ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "We say that x is accepted by R when it is accepted by (R, S). The weight associated by (R, \u03bd) (and by R) to x can be defined in the same recursive manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "As an example of testing whether an RTN accepts a sequence, consider the RTN R of Figure 6 and the sequence x = a a b. The path in the automata T S can be written as",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
{
"text": "\u03c0 = \u03c0 1 e 1 \u03c0 2 , with i[\u03c0 1 ] = a, i[e 1 ] = X 1 , and i[\u03c0 2 ] = b. In addition, the machine (R, i[e 1 ]) accepts x 1 = a. Because x = i[\u03c0 1 ] x 1 i[\u03c0 2 ], it follows that x is accepted by (R, S).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Transition Networks",
"sec_num": "3.1"
},
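{
"text": "The recursive acceptance condition above can be made concrete with a small sketch. This is a minimal illustration, not the OpenFst PDT API: the dictionary layout and the function rtn_accepts are hypothetical, each T \u03bd is a toy FSA (initial state, final states, transitions), and acceptance tries every split of the input for a nonterminal transition, mirroring x = i[\u03c0 1 ]x 1 . . . i[\u03c0 n+1 ]. The sketch assumes an acyclic RTN (a self-recursive nonterminal accepting \u01eb could loop forever).

```python
# Minimal sketch of RTN acceptance (hypothetical layout, not the OpenFst
# PDT API). Each machine is (initial, finals, transitions) with
# transitions[state] = [(label, next_state)]; labels appearing in
# rtn["nonterminals"] are expanded recursively.

def rtn_accepts(rtn, nu, x):
    """True iff the terminal string x is accepted by (R, nu)."""
    init, finals, trans = rtn["machines"][nu]

    def walk(state, rest):
        if state in finals and not rest:
            return True
        for label, nxt in trans.get(state, []):
            if label in rtn["nonterminals"]:
                # try every split: a prefix accepted by the sub-RTN,
                # the remainder consumed after the nonterminal transition
                for k in range(len(rest) + 1):
                    if rtn_accepts(rtn, label, rest[:k]) and walk(nxt, rest[k:]):
                        return True
            elif rest and rest[0] == label:
                if walk(nxt, rest[1:]):
                    return True
        return False

    return walk(init, tuple(x))

# RTN in the spirit of Figure 6: T_S accepts a X1 b, T_X1 accepts a.
rtn = {
    "nonterminals": {"X1"},
    "machines": {
        "S":  (0, {3}, {0: [("a", 1)], 1: [("X1", 2)], 2: [("b", 3)]}),
        "X1": (0, {1}, {0: [("a", 1)]}),
    },
}
```

With this toy RTN, the worked example of the text goes through: a a b is accepted because (R, X1) accepts the middle a, while a b is rejected.
},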
{
"text": "This algorithm converts an RTN into a PDA. As explained in Section 1.1, this PDT operation is applied by the HiPDT decoder in Step 1, and examples are given in earlier sections (e.g., in figures 1 and 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
{
"text": "Replacement acts on every transition of the RTN that is associated with a nonterminal. The source and destination states of these transitions are used to define the matched opening and closing parentheses, respectively, in the new PDA. Each RTN nonterminal transition is deleted and replaced by two new transitions that lead to and from the automaton indicated by the nonterminal. These new transitions have matched parentheses, taken from the source and destination states of the RTN transition they replace. Figure 6 gives a simple example.",
"cite_spans": [],
"ref_spans": [
{
"start": 510,
"end": 518,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
{
"text": "Formally, given an RTN R, defined as (N, \u03a3, (T \u03bd ) \u03bd\u2208N , S), its replacement is the PDA T equivalent to R defined by the 8-tuple",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
{
"text": "(\u03a3, \u03a0, \u03a0, Q, E, I, F, \u03c1) with Q = \u03a0 = \u03bd\u2208N Q \u03bd , I = I S , F = F S , \u03c1 = \u03c1 S , and E = \u03bd\u2208N e\u2208E \u03bd E e where E e = {e} if i[e] \u2208 N and E e = {(p[e], n[e], \u01eb, w[e], I \u00b5 ), (f, n[e], \u01eb, \u03c1 \u00b5 (f ), n[e])|f \u2208 F \u00b5 } (3) with \u00b5 = i[e] \u2208 N otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
{
"text": "The complexity of the construction is in O(|T|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
{
"text": "If |F \u03bd | = 1 for all \u03bd \u2208 N, then |T| = O( \u03bd\u2208N |T \u03bd |) = O(|R|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
{
"text": "Creating a superfinal state for each T \u03bd would lead to a T whose size is always linear in the size of R. In this article, we assume this optimization is always performed. We note here that RTNs can be defined and the replacement operation can be applied in any semiring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Replacement",
"sec_num": "3.2"
},
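{
"text": "The replacement construction can be sketched directly from Equation (3). This is an illustrative sketch under the same hypothetical RTN layout as above (not the OpenFst PDT Replace operation): transitions are (source, label, weight, destination) tuples, and a nonterminal transition (p, \u03bd, w, n) is deleted and replaced by an open parenthesis keyed by n from p to I \u03bd plus a matching close parenthesis from every final state of T \u03bd back to n, exactly as in the Figure 6 example.

```python
# Sketch of the replacement operation of Section 3.2 (hypothetical
# representation, not the OpenFst PDT API). machines: nu ->
# (initial, {final: rho}, [(src, label, weight, dst), ...]).

def replace(rtn):
    edges = []
    for nu, (init, finals, trans) in rtn["machines"].items():
        for p, label, w, n in trans:
            if label in rtn["machines"]:                 # nonterminal: replace
                sub_init, sub_finals, _ = rtn["machines"][label]
                edges.append((p, ("(", n), w, sub_init))  # open paren into T_mu
                for f, rho_f in sub_finals.items():
                    # matching close paren from each final state back to n[e]
                    edges.append((f, (")", n), rho_f, n))
            else:                                         # terminal: keep as-is
                edges.append((p, label, w, n))
    root_init, root_finals, _ = rtn["machines"][rtn["root"]]
    return root_init, root_finals, edges
```

On a two-machine RTN (T_S: 1 -a-> 2 -X1-> 3 -b-> 4, T_X1: 5 -a-> 6), the nonterminal transition (2, X1, 3) yields the parenthesis pair (2, (3, 5) and (6, )3, 3), matching the construction described in the Figure 6 caption.
},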
{
"text": "Once we have created the PDA with translation scores, Step 2 in Section 1.1 applies the language model scores to the translation space. This is done by composition with an FSA containing the relevant language model weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "The class of weighted pushdown transducers is closed under composition with weighted finite-state transducers (Bar-Hillel, Perles, and Shamir 1964; Nederhof and Satta 2003) . OpenFST supports composition between automata T 1 and T 2 , where T 1 is a weighted pushdown transducer and T 2 is a weighted finite-state transducer. If both T 1 and T 2 are acceptors, rather than transducers, the composition of a PDA and an FSA produces a PDA containing their intersection, and so no separate intersection algorithm is required for these automata. Given this, we describe only the simpler, special case of intersection between a PDA and an FSA, as this is sufficient for most of the translation applications described in this article. The alignment experiments of Conversion of an RTN R to a PDA T by the replacement operation of Section 3.2. Using the notation of Section 2.1, in this example \u03a0 = {3, 5} and \u03a0 = {3,5}, with f (3) =3 and f (5) =5. The unweighted transition (2, X 1 , 3) in R is deleted and replaced by two new transitions (2, 3, 5) and (6,3, 3); similarly, (5, X 2 , 6) is replaced by (5, 6, 7) and (8,6, 6). After application of the r \u03a3 mapping, the strings accepted by R and by T are the same.",
"cite_spans": [
{
"start": 110,
"end": 147,
"text": "(Bar-Hillel, Perles, and Shamir 1964;",
"ref_id": null
},
{
"start": 148,
"end": 172,
"text": "Nederhof and Satta 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "RTN R 1 2 3 4 a X 1 b T S 5 6 X 2 a 7 8 b T X 1 T X 2 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "0 1 a b 2 a b 3 a b 4 a b T 2 0 1 a 2 \u03b5 ( 3 ) b T 1 0,0 1,1 a 2,0 \u03b5 0,1 ( 3,0 ) 1,2 a 2,1 \u03b5 b 0,2 ( 3,1 ) 1,3 a 2,2 \u03b5 b 0,3 ( 3,2 ) 1,4 a 2,3 \u03b5 b 0,4 ( 3,3 ) 2,4 \u03b5 b T Figure 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "Composition example: Composition of a PDA T 1 accepting {a n , b n } with an FSA T 2 accepting {a, b} 4 to produce a PDA T = T 1 \u2229 T 2 . T has only one balanced path, and this path accepts a(a(\u01eb)b)b. Composition is performed by the PDA-FSA intersection described in Section 3.3. Section 4.3 do require composition of transducers; the algorithm for composition of transducers is given in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "An example of composition by intersection is given in Figure 7 . The states of T are created as the product of all the states in T 1 and T 2 . Transitions are added as illustrated in Figure 8 . These correspond to all paths through T 1 and T 2 that can be taken by a synchronized reading of strings from {a, b} * . The algorithm is very similar to the composition algorithm for finite-state transducers, the difference being the handling of the parentheses. The parenthesis-labeled transitions are treated similarly to epsilon transitions, but the parenthesis labels are preserved in the result. This adds many unbalanced paths to T. In this example, T has five paths but only one balanced path, so that T accepts the string a a b b.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 7",
"ref_id": null
},
{
"start": 183,
"end": 191,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
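{
"text": "The product construction described above can be sketched compactly. This is a toy illustration of the Figure 8 rules, not the OpenFst implementation: automata are hypothetical (initial, finals, transitions) triples, PDA labels are terminals, None for \u01eb, or (\"(\", k) / (\")\", k) parenthesis pairs, and the FSA is assumed \u01eb-free as in the text. Parenthesis and \u01eb transitions of T 1 advance alone, preserving their labels; matching terminals advance both machines and add the weights.

```python
from collections import deque

# Sketch of PDA-FSA intersection (Section 3.3, Figure 8 rules), on a toy
# representation: each machine is (initial, finals, [(q, a, w, q2), ...]).

def intersect(pda, fsa):
    (I1, F1, E1), (I2, F2, E2) = pda, fsa
    start = (I1, I2)
    edges, seen, queue = [], {start}, deque([start])
    while queue:
        q1, q2 = queue.popleft()
        for (p, a, w, n) in E1:
            if p != q1:
                continue
            if a is None or isinstance(a, tuple):  # epsilon or parenthesis:
                dests = [((n, q2), a, w)]          # T2 does not move
            else:                                  # terminal: labels must match
                dests = [((n, n2), a, w + w2)
                         for (p2, a2, w2, n2) in E2 if p2 == q2 and a2 == a]
            for dest, lab, wt in dests:
                edges.append(((q1, q2), lab, wt, dest))
                if dest not in seen:
                    seen.add(dest)
                    queue.append(dest)
    finals = {(f1, f2) for f1 in F1 for f2 in F2 if (f1, f2) in seen}
    return start, finals, edges
```

Only reachable product states are generated, as noted in the text; parenthesis labels survive into the result, so unbalanced paths may be created and are discarded later by shortest-path or expansion.
},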
{
"text": "Formally, given a PDA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "T 1 = (\u03a3, \u03a0, \u03a0, Q 1 , E 1 , I 1 , F 1 , \u03c1 1 ) and an FSA T 2 = (\u03a3, Q 2 , E 2 , I 2 , F 2 , \u03c1 2 ), intersection constructs a new PDA T = (\u03a3, \u03a0, \u03a0, Q, E, I, F, \u03c1), where T = T 1 \u2229 T 2 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "1. The new state space is in the product of the input state spaces:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "Q \u2282 Q 1 \u00d7 Q 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "2. The new initial and final states are I = (I 1 , I 2 ), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "F = {(q 1 , q 2 ) : q 1 \u2208 F 1 , q 2 \u2208 F 2 }. 3. Weights are assigned to final states (q 1 , q 2 ) \u2208 Q as \u03c1(q 1 , q 2 ) = \u03c1(q 1 ) + \u03c1(q 2 ). 4. For pairs of transitions (q 1 , a 1 , w 1 , q \u2032 1 ) \u2208 E 1 and (q 2 , a 2 , w 2 , q \u2032 2 ) \u2208 E 2 , a transition is added between states (q 1 , q 2 ) and (q \u2032 1 , q \u2032 2 ) as specified in Figure 8. PDT T 1 FSA T 2 PDT T = T 1 \u2229 T 2 Input Symbols q 1 q \u2032 1 a 1 /w 1 q 2 q \u2032 2 a 2 /w 2 q 1 , q 2 q \u2032 1 , q \u2032 2 a 1 /w 1 + w 2 a 1 \u2208 \u03a3 and a 1 = a 2 q 1 , q 2 q \u2032 1 , q 2 a 1 /w 1 a 1 \u2208 \u03a0 \u222a \u03a0 or a 1 = \u01eb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "Transitions are added to T if and only if the conditions on the input symbols are satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "3.3"
},
{
"text": "PDA-FSA intersection under the tropical semiring. The PDA T is created by the intersection of the PDA T 1 and the FSA T 2 , i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 8",
"sec_num": null
},
{
"text": "T = T 1 \u2229 T 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 8",
"sec_num": null
},
{
"text": "The intersection algorithm given here assumes that T 2 has no input-\u01eb transitions. When T 2 has input-\u01eb transitions, an epsilon filter (Mohri 2009; Allauzen, Riley, and Schalkwyk 2011) generalized to handle parentheses can be used. Note that Steps 1 and 2 do not require the construction of all possible pairs of states; only those states reachable from the initial state and needed in Step 4 are actually generated. The complexity of the algorithm is in O(|T 1 | |T 2 |) in the worst case, as will be discussed in Section 4. Composition requires the semiring to be commutative.",
"cite_spans": [
{
"start": 135,
"end": 147,
"text": "(Mohri 2009;",
"ref_id": "BIBREF27"
},
{
"start": 148,
"end": 184,
"text": "Allauzen, Riley, and Schalkwyk 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 8",
"sec_num": null
},
{
"text": "With a PDA including both translation and language model weights, HiPDT can extract the best translation (Step 3a in Section 1.1). To this end, a general PDA shortest distance/path algorithm is needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortest Distance and Path Algorithms",
"sec_num": "3.4"
},
{
"text": "A shortest path in a PDA T is a balanced accepting path with minimal weight and the shortest distance in T is the weight of such a path. We show that when T has a bounded stack, shortest distance and shortest path can be computed in O(|T| 3 log |T|) time (assuming T has no negative weights) and O(|T| 2 ) space. Figure 9 gives a pseudo-code description of the shortest-distance algorithm, which we now discuss.",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shortest Distance and Path Algorithms",
"sec_num": "3.4"
},
{
"text": "SHORTESTDISTANCE(T) 1 for each q \u2208 Q and a \u2208 \u03a0 do 2 B[q, a] \u2190 \u2205 3 for each q \u2208 Q do 4 d[q, q] \u2190 \u221e 5 GETDISTANCE(T, I)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortest Distance and Path Algorithms",
"sec_num": "3.4"
},
{
"text": "\u22b2 I is the unique initial state 6 return d [I, f ] \u22b2 f is the unique final state ",
"cite_spans": [
{
"start": 43,
"end": 50,
"text": "[I, f ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shortest Distance and Path Algorithms",
"sec_num": "3.4"
},
{
"text": "RELAX(s, q, w, S ) 1 if d[s, q] > w then \u22b2 if w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortest Distance and Path Algorithms",
"sec_num": "3.4"
},
{
"text": "2 d[s, q] \u2190 w \u22b2 update d[s, q] 3 if q \u2208 S then \u22b2 enqueue q in S if needed 4 ENQUEUE(S, q) GETDISTANCE(T,s) 1 for each q \u2208 Q do 2 d[s, q] \u2190 \u221e 3 d[s, s] \u2190 0 4 S s \u2190 {s} 5 while S s = \u2205 do 6 q \u2190 HEAD(S s ) 7 DEQUEUE(S s ) 8 for each e \u2208 E[q] do \u22b2 E(q) is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "is a better estimate of the distance from s to q",
"sec_num": null
},
{
"text": "PDT shortest distance algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "Given a PDA T = (\u03a3, \u03a0, \u03a0, Q, E, I, F, \u03c1), the GETDISTANCE(T) algorithm computes the shortest distance from the start state I to the final state 2 f \u2208 F. The algorithm recursively calculates d[q, q \u2032 ] \u2208 K -the shortest distance from state q to state q \u2032 along a balanced path At termination, the algorithm returns d [I, f ] as the cost of the shortest path through T.",
"cite_spans": [
{
"start": 316,
"end": 323,
"text": "[I, f ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "The core of the shortest distance algorithm is the procedure GETDISTANCE(T, s) which calculates the distances d [s, q] for all states q that can be reached from s. For an FSA, this procedure is called once, as GETDISTANCE(T, I), to calculate d [I, q] \u2200q.",
"cite_spans": [
{
"start": 112,
"end": 118,
"text": "[s, q]",
"ref_id": null
},
{
"start": 244,
"end": 250,
"text": "[I, q]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "For a PDA, the situation is more complicated. Given a state s in T with at least one incoming open parenthesis transition, we denote by C s the set of states that can be reached by a balanced path starting from s. If s has several incoming open parenthesis transitions, a naive implementation might lead to the states in C s to be visited exponentially many times. This is avoided by memoizing the shortest distance from s to states in C s . To do this, GETDISTANCE(T, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "e \u2032 = (s \u2032 , a, w \u2032 , q \u2032 ), e \u2032 \u2208 B[s, a] the following holds 3 d[q, q \u2032 ] = w + d[s, s \u2032 ] + w \u2032 (5) If d[s, s \u2032 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "is available, the shortest distance from q to q \u2032 along any balanced path through s can be computed trivially by Equation (5). For any state s with incoming open parenthesis transitions, only a single call to GETDISTANCE(T, s) is needed to precompute the necessary values. Figure 10 gives an example. When transition (2, ( 1 , 0, 5) is processed, GETDISTANCE(T, 5) is called. The distance d [5, 7] is computed, and following transitions are logged: B[5, ( 1 ] \u2190 {(7, ) 1 , 0, 8)} and B[5, ( 2 ] \u2190 {(7, ) 2 , 0, 9)}. Later, when the transition (4, ( 2 , 0, 5) is processed, its matching transition (7, ) 2 , 0, 9) is extracted from B[5, ( 2 ]. The distance d [4, 9] is then found by Equation (5) as d [5, 7] . This avoids redundant re-calculation of distances along the shortest balanced path from state 4 to state 9.",
"cite_spans": [
{
"start": 391,
"end": 394,
"text": "[5,",
"ref_id": null
},
{
"start": 395,
"end": 397,
"text": "7]",
"ref_id": null
},
{
"start": 658,
"end": 661,
"text": "[4,",
"ref_id": null
},
{
"start": 662,
"end": 664,
"text": "9]",
"ref_id": null
},
{
"start": 700,
"end": 703,
"text": "[5,",
"ref_id": null
},
{
"start": 704,
"end": 706,
"text": "7]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 273,
"end": 282,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
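{
"text": "The memoization of Equation (5) can be sketched in Python. This is a simplified illustration, not the OpenFst PdtShortestPath implementation: the PDA is a hypothetical list of (q, a, w, n) tuples with parenthesis labels (\"(\", k) / (\")\", k), weights are tropical (min, +), and the sketch assumes non-negative weights and that each balanced component is fully explored by a single recursive call, mirroring how GETDISTANCE(T, n[e]) is invoked once per component entry state.

```python
import heapq

# Sketch of the PDT shortest-distance algorithm of Figure 9 over the
# tropical semiring (toy representation, not the OpenFst PDT API).

def shortest_distance(edges, start, final):
    out = {}
    for e in edges:
        out.setdefault(e[0], []).append(e)
    d = {}   # d[(s, q)]: shortest balanced distance from s to q
    B = {}   # B[(s, k)]: close-paren transitions balancing opens into s

    def relax(s, q, w, heap):
        if w < d.get((s, q), float("inf")):
            d[(s, q)] = w
            heapq.heappush(heap, (w, q))

    def get_distance(s):
        d[(s, s)] = 0.0
        heap = [(0.0, s)]
        while heap:
            w_q, q = heapq.heappop(heap)
            if w_q > d.get((s, q), float("inf")):
                continue                                # stale queue entry
            for (_, a, w, n) in out.get(q, []):
                if isinstance(a, tuple) and a[0] == ")":
                    # memoize: this transition balances opens entering s
                    B.setdefault((s, a[1]), []).append((q, w, n))
                elif isinstance(a, tuple):              # open parenthesis
                    if (n, n) not in d:
                        get_distance(n)                 # solve component once
                    for (q2, w2, n2) in B.get((n, a[1]), []):
                        # Equation (5): d through the balanced component
                        relax(s, n2, w_q + w + d[(n, q2)] + w2, heap)
                else:                                   # ordinary transition
                    relax(s, n, w_q + w, heap)

    get_distance(start)
    return d.get((start, final), float("inf"))
```

Calling get_distance only on component entry states, and reusing B for every open parenthesis into the same component, is what keeps the overall cost polynomial rather than exponential.
},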
{
"text": "We now briefly discuss the shortest distance pseudo-code given in Figure 9 . The description may be easier to follow after reading the worked example in Figure 10 . Note that the sets C s are not computed explicitly by the algorithm.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 9",
"ref_id": null
},
{
"start": 153,
"end": 162,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "The shortest distance calculation proceeds as follows. Self-distances, that is, d [q, q] , are set initially to \u221e; when GETDISTANCE(T, q) is called it sets d [q, q] = 0 to note that q has been visited. GETDISTANCE(T, s) starts a new instance of the shortest-distance algorithm from s using the queue S s , initially containing s. While the queue is not empty, a state is dequeued and its outgoing transitions examined (lines 7-11). Transitions labeled by non-parenthesis are treated as in Mohri (2009) (lines 7-8). When a transition e is labeled by a close parenthesis, e is added to B [s, i[e] ] to indicate that this transition 0 1 2 5 6 7 8 10 3 4 9 t 1 /20",
"cite_spans": [
{
"start": 82,
"end": 88,
"text": "[q, q]",
"ref_id": null
},
{
"start": 158,
"end": 164,
"text": "[q, q]",
"ref_id": null
},
{
"start": 489,
"end": 501,
"text": "Mohri (2009)",
"ref_id": "BIBREF27"
},
{
"start": 586,
"end": 594,
"text": "[s, i[e]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "t 3 /200 ( 2 t 1 /10 t 2 /100 ( 1 t 2 /1 t 3 /1 ) 1 t 4 /1, 000 ) 2 t 6 /1, 000 GETDISTANCE(T) runs 1. Initialization: d[q, q] \u2190 \u221e, \u2200q \u2208 Q 2. GETDISTANCE(T, 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "is called GETDISTANCE(T, 0) runs 3. Distances are calculated from state 0:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "d[0, 0] \u2190 0; d[0, 1] \u2190 d[0, 0] + w[0, 1]; d[0, 2] \u2190 d[0, 1] + w[1, 2] 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": ". Transition e 1 = (2, ( 1 , 0, 5) is reached. e 1 has symbol i[e 1 ] = ( 1 and destination state n[e 1 ] = 5 5. d [5, 5] = \u221e so GETDISTANCE(T, 5) is called GETDISTANCE(T, 5) runs 6. Distances are calculated from state 5:",
"cite_spans": [
{
"start": 115,
"end": 118,
"text": "[5,",
"ref_id": null
},
{
"start": 119,
"end": 121,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "d[5, 5] \u2190 0; d[5, 6] \u2190 d[5, 5] + w[5, 6]; d[5, 7] \u2190 d[5, 6] + w[6, 7] 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "The transitions (7, ) 1 , 0, 8) and (7, ) 2 , 0, 9) are reached and memoized Step-by-step description of the shortest distance calculation for the given PDA by the algorithm of Figure 9 . For simplicity, w[q, q \u2032 ] indicates the weight of the transition connecting q and q \u2032 .",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "B[5, ( 1 ] \u2190 {(7, ) 1 , 0, 8)} B[5, ( 2 ] \u2190 {(7, ) 2 , 0, 9)} GETDISTANCE(T, 5) ends GETDISTANCE(T, 0) resumes 8. Transition e 1 = (2, ( 1 , 0, 5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "balances all incoming open parentheses into s labeled by i[e] (lines 9-10). Finally, if e has an open parenthesis, and if its destination has not already been visited, a new instance of GETDISTANCE is started from n[e] (lines 12-13). The destination states of all transitions balancing e are then relaxed (lines 14-16).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "The space complexity of the algorithm is quadratic for two reasons. First, the number of non-infinity d [q, s] is O(|E|) in the worst case. This last observation also implies that the accumulated number of transitions examined at line 16 is in O(Z|Q| |E| 2 ) in the worst case, where Z denotes the maximal number of times a state is inserted in the queue for a given call of GETDISTANCE. Assuming the cost of a queue operation is \u0393(n) for a queue containing n elements, the worst-case time complexity of the algorithm can then be expressed as O(Z|T| 3 \u0393(|T|)). When T contains no negative weights, using a shortestfirst queue discipline leads to a time complexity in O(|T| 3 log |T|). When all the C s 's are acyclic, using a topological order queue discipline leads to a O(|T| 3 ) time complexity.",
"cite_spans": [
{
"start": 104,
"end": 110,
"text": "[q, s]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "As was shown in Section 3.2, when T has been obtained by converting an RTN or a hypergraph into a PDA, the polynomial dependency in |T| becomes a linear dependency both for the time and space complexities. Indeed, for each q in T, there exists a unique s such that d [s, q] The algorithm can be modified (without changing the complexity) to compute the shortest path by keeping track of parent pointers. The notion of shortest path requires the semiring (K, \u2295, \u2297, 0, 1) to have the path property: for all a, b in K, a \u2295 b \u2208 {a, b}. The shortest-distance operation as presented here and the shortest-path operation can be applied in any semiring having the path property by using the natural order defined by \u2295: a \u2264 b iff a \u2295 b = a. However, the shortest distance algorithm given in Figure 9 can be extended to work for k-closed semirings using the same techniques that were used by Mohri (2002) .",
"cite_spans": [
{
"start": 267,
"end": 273,
"text": "[s, q]",
"ref_id": null
},
{
"start": 882,
"end": 894,
"text": "Mohri (2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 782,
"end": 790,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "The shortest distance in the intersection of a string s and a PDA T determines if T recognizes s. PDA recognition is closely related to CFG parsing; a CFG can be represented as a PDT whose input recognizes the CFG and whose output identifies the parse (Aho and Ullman 1972) . Lang (1974) showed that the cubic tabular method of Earley can be naturally applied to PDAs; others give the weighted generalizations (Stolcke 1995; Nederhof and Satta 2006 ). Earley's algorithm has its analogs in the algorithm in Figure 9 : the scan step corresponds to taking a non-parenthesis transition at line 10, the predict step to taking an open parenthesis at lines 14-15, and the complete step to taking the closed parentheses at lines 16-18. Specialization to Translation. Following the formalism of Section 1, we are interested in applying shortest distance and shortest path algorithms to automata created as L = T p \u2229 M, where T p , the translation representation, is a PDA derived from an RTN (via replacement) and M, the language model, is a finite automaton.",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "(Aho and Ullman 1972)",
"ref_id": "BIBREF0"
},
{
"start": 276,
"end": 287,
"text": "Lang (1974)",
"ref_id": "BIBREF24"
},
{
"start": 410,
"end": 424,
"text": "(Stolcke 1995;",
"ref_id": "BIBREF37"
},
{
"start": 425,
"end": 448,
"text": "Nederhof and Satta 2006",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 507,
"end": 515,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "For this particular case, the time complexity is O(|T p ||M| 3 ) and the space complexity is O(|T p ||M 2 |). The dependence on |T p | is linear, rather than cubic or quadratic. The reasoning is as follows. Given a state q in T p , there exists a unique s q such that q belongs to C s q . Given a state (q 1 , q 2 ) in T p \u2229M, (q 1 , q 2 ) \u2208 C (s 1 ,s 2 ) only if s 1 = s q 1 , and hence (q 1 , q 2 ) belongs to at most |M| components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "As explained in Section 1.1, HiPDT can apply Step 3b to generate translation lattices. This step is typically required for any posterior lattice rescoring strategies. We first describe the unpruned expansion. However, in practice a pruning strategy of some sort is required to avoid state explosion. Therefore, we also describe an implementation of the PDA expansion that includes admissible pruning under a likelihood beam, thus controlling on-the-fly the size of the output lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "3.5.1 Full Expansion. Given a bounded-stack PDA T, the expansion of T is the FSA T \u2032 equivalent to T. A simple example is given in Figure 11 .",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 140,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "Expansion starts from the PDA initial state. States and transitions are added to the FSA as the expansion proceeds along paths through the PDA. In the new FSA, parentheses are replaced by epsilons, and as open parentheses are encountered on PDA transitions, they are \"pushed\" into the FSA state labels; in this way the stack depth is maintained along different paths through the PDA. Conversely, when a closing parenthesis is encountered on a PDA path, a corresponding opening parenthesis is \"popped\" from the FSA state label; if this is not possible, for example, as in state (5, \u01eb) in Figure 11 , expansion along that path halts.",
"cite_spans": [],
"ref_spans": [
{
"start": 587,
"end": 596,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "The resulting automata accept the same language. The FSA topology changes, typically with more states and transitions than the original PDA, and the number of added states is controlled only by the maximum stack depth of the PDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "Formally, suppose the PDA T = (\u03a3, \u03a0, \u03a0, Q, E, I, F, \u03c1) has a maximum stack depth of K. The set of states in its FSA expansion T \u2032 are then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Q \u2032 = {(q, z) : q \u2208 Q , z \u2208 \u03a0 * and |z| \u2264 K}",
"eq_num": "(6)"
}
],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "and T \u2032 has initial state (I, \u01eb) and final states F \u2032 = {(q, \u01eb) : q \u2208 F}. The condition that T has a bounded stack ensures that Q \u2032 is finite. Transitions are added to T \u2032 as described in Figure 12 . The full expansion operation can be applied to PDA over any semiring. The complexity of the algorithm is linear in the size of T \u2032 . However, the size of T \u2032 can be exponential in the size of T, which motivates the development of pruned expansion, as discussed next.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 197,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
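{
"text": "The expansion just defined can be sketched as a breadth-first unfolding. This is a toy illustration under the same hypothetical (q, a, w, n) transition tuples used above, not the OpenFst PdtExpand implementation: parentheses become \u01eb transitions, opens are pushed onto the state's stack component z, closes pop their match, and a close that does not match the stack top halts that path, exactly as described for state (5, \u01eb) in Figure 11.

```python
from collections import deque

# Sketch of full expansion (Section 3.5): unfold a bounded-stack PDA into
# an FSA whose states are (q, tuple-of-open-parens). Toy representation.

def expand(edges, start, finals):
    out = {}
    for e in edges:
        out.setdefault(e[0], []).append(e)
    s0 = (start, ())
    fsa_edges, seen, queue = [], {s0}, deque([s0])
    while queue:
        q, z = queue.popleft()
        for (_, a, w, n) in out.get(q, []):
            if isinstance(a, tuple) and a[0] == "(":
                dest, lab = (n, z + (a[1],)), None   # push open, emit epsilon
            elif isinstance(a, tuple):
                if not z or z[-1] != a[1]:
                    continue                          # unbalanced: halt path
                dest, lab = (n, z[:-1]), None         # pop match, emit epsilon
            else:
                dest, lab = (n, z), a                 # ordinary label
            fsa_edges.append(((q, z), lab, w, dest))
            if dest not in seen:
                seen.add(dest)
                queue.append(dest)
    fsa_finals = {(f, ()) for f in finals if (f, ()) in seen}
    return s0, fsa_finals, fsa_edges
```

Only states (f, \u01eb) with an empty stack are final, so unbalanced paths never reach a final state and can be trimmed afterwards; the bounded-stack assumption is what keeps the state set finite.
},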
{
"text": "0 1 2 3 4 5 6 [ [ [ a ] b ] c 0, \u01eb 1, [ 2, [[ 3, [[ 4, [ 5, [ 6, \u01eb \u01eb \u01eb a \u01eb b \u01eb 2, [ 3, [ \u01eb a c 4, \u01eb 5, \u01eb \u01eb b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expansion",
"sec_num": "3.5"
},
{
"text": "Full expansion of a PDA to an equivalent FSA. The PDA maximum stack depth is 2; therefore the FSA states belong to {0, .., 6} \u00d7 {\u01eb, [, [[}. Expansion can create incomplete paths in the FSA (e.g., corresponding here to the unbalanced PDA path [ a ] b ]); however these are guaranteed to be unconnected, namely, not to lead to a final state. Any unconnected states are removed after expansion.",
"cite_spans": [
{
"start": 132,
"end": 139,
"text": "[, [[}.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "Transition in PDA T New transition in FSA T \u2032 Conditions Explanation q, z q \u2032 , z a/w a \u2208 \u03a3 \u222a {\u01eb} a is not a parenthesis; stack depth is unchanged q q \u2032 a/w q, z q \u2032 , za \u01eb a \u2208 \u03a0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "a is an open parenthesis; an epsilon transition is added, and a is \"pushed\" into the destination state, increasing the stack depth",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "q, z \u2032 a q \u2032 , z \u2032 \u01eb a \u2208 \u03a0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": "a is a closing parenthesis; an epsilon transition is added, and the matching open parenthesis a is \"popped\" from the destination state, decreasing the stack depth Figure 12 PDA Expansion. A states (q, z) and (q \u2032 , z \u2032 ) in the FSA T \u2032 will be connected by a transition if and only if the above conditions hold on the corresponding transition between q and q \u2032 in the PDA T.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 172,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 11",
"sec_num": null
},
{
"text": ". Given a bounded-stack PDA T, the pruned expansion of T with threshold \u03b2 is an FST T \u2032 \u03b2 obtained by deleting from T \u2032 all states and transitions that do not belong to any accepting path",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "\u03c0 in T \u2032 such that w[\u03c0] \u2297 \u03c1[\u03c0] \u2264 d + \u03b2, where d is the shortest distance in T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "A naive implementation consisting of fully expanding T and then applying the FST pruning algorithm would lead to a complexity in O(|T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "\u2032 | log |T \u2032 |) = O(e |T| |T|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "Assuming that the reverse T R of T is also bounded-stack, an algorithm whose complexity is in O(|T| |T \u2032 \u03b2 | + |T| 3 log |T|) can be obtained by first applying the shortest distance algorithm from the previous section to T R and then using this to prune the expansion as it is generated. To simplify the presentation, we assume that F = { f } and \u03c1( f ) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "The motivation for using reversed automaton in pruning is easily seen by looking at FSAs. For an FSA, the cost of the shortest path through a transition (q, x, w, q \u2032 ) can be stated as d [I, q] [I, q] (i.e., distances from the start state) are computed by the shortest distance algorithm, as discussed in Section 3.4. However, distances of the form d[q \u2032 , f ] are not readily available. To compute these, a shortest distance algorithm is run over the reversed automaton. Reversal preserves states and transitions, but swaps the source and destination state (see Figure 13 for a PDA example). The start state in the reversed machine is f , so that distances are computed from f ; these are denoted d R [f, q] and correspond to d [q, f ] in the original FSA. The cost of the shortest path through an FSA transition (q, x, w, q \u2032 ) can then be computed as d [I, q] ",
"cite_spans": [
{
"start": 188,
"end": 194,
"text": "[I, q]",
"ref_id": null
},
{
"start": 195,
"end": 201,
"text": "[I, q]",
"ref_id": null
},
{
"start": 730,
"end": 737,
"text": "[q, f ]",
"ref_id": null
},
{
"start": 857,
"end": 863,
"text": "[I, q]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 564,
"end": 573,
"text": "Figure 13",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "+ w + d[q \u2032 , f ]. Distances d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "+ w + d R [f, q \u2032 ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "Calculation for PDAs is more complex. Transitions with parentheses must be handled such that distances through them are calculated over balanced paths. For example, if T in Figure 13 was an FSA, the shortest cost of any path through the transition e = (4, ( 2 , 0, 5) could be calculated as d[0, 4] + 0 + d [5, 10] . However, this is not correct, because d [5, 10] , the shortest distance from 5 to 10, is found via a path through the transition (7, ) 1 , 0, 8).",
"cite_spans": [
{
"start": 307,
"end": 310,
"text": "[5,",
"ref_id": null
},
{
"start": 311,
"end": 314,
"text": "10]",
"ref_id": null
},
{
"start": 357,
"end": 360,
"text": "[5,",
"ref_id": null
},
{
"start": 361,
"end": 364,
"text": "10]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 173,
"end": 182,
"text": "Figure 13",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "Correct calculation of the minimum cost of balanced paths through PDA transitions can be done using quantities computed by the PDA shortest distance algorithm. For a 0 1 2 5 6 7 8 10 3 4 9 t 1 /20",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruned Expansion",
"sec_num": "3.5.2"
},
{
"text": "PDA T and its reverse T R . T R has start state 10, final state 0, \u03a0 R = {) 1 , ) 2 }, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "\u03a0 R = {( 1 , ( 2 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "PDA transition e = (q, a, w, q \u2032 ), a \u2208 \u03a0, the cost of the shortest balanced path through e can be found as 4 c(e) = d [I, q] ",
"cite_spans": [
{
"start": 119,
"end": 125,
"text": "[I, q]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "+ w[e] + min e \u2032 \u2208B[q \u2032 ,a] d[q \u2032 , p[e \u2032 ]] + w[e \u2032 ] + d R [n[e \u2032 ], f ]",
"eq_num": "(7)"
}
],
"section": "Figure 13",
"sec_num": null
},
{
"text": "where ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "B[q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "= 220 + 0 + 2 + 0 + 1, 000",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "Pruned expansion is therefore able to avoid expanding transitions that would not contribute to any path that would survive pruning. Prior to expansion of a PDA T to an FSA T \u2032 , the shortest distance d in T is calculated. Transitions e = (q, a, w, q \u2032 ), a \u2208 \u03a0, are expanded as transitions e = ((q, z), q, w, (q \u2032 , za)) in T \u2032 only if c(e) \u2264 d + \u03b2, as calculated by Equation (7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "The pruned expansion algorithm implemented in OpenFST is necessarily more complicated than the simple description given here. Pseudo-code describing the Open-FST implementation is given in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "The pruned expansion operation can be applied in any semiring having the path property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 13",
"sec_num": null
},
{
"text": "We now address the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "r What are the differences between the FSA and PDA representations as observed in a translation/alignment task?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "r How do their respective decoding algorithms perform in relation to the complexity analysis described here?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "r How many times is exact decoding achievable in each case?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "We will discuss the complexity of both HiPDT and HiFST decoders as well as the hypergraph representation, with an emphasis on Hiero-style SCFGs. We assess our analysis for FSA and PDA representations by contrasting HiFST and HiPDT with large grammars for translation and alignment. For convenience, we refer to the hypergraph representation as T h , and to the FSA and PDA representations as T f and T p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "We first analyze the complexity of each MT step described in the introduction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "1. SCFG Translation: Assuming that the parsing of the input is performed by a CYK parse, then the CFG, hypergraph, RTN, and PDA representations can be generated in O(|s| 3 |G|) time and space (Aho and Ullman 1972) . The FSA representation can require an additional O(e |s| 3 |G| ) time and space because the RTN expansion to FSA can be exponential.",
"cite_spans": [
{
"start": 192,
"end": 213,
"text": "(Aho and Ullman 1972)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Analysis and Experiments: Computational Complexity",
"sec_num": "4."
},
{
"text": "The intersection of a CFG T h with a finite automaton M can be performed by the classical Bar-Hillel algorithm (Bar-Hillel, Perles, and Shamir 1964) with time and space complexity O(|T h ||M| l+1 ), where l is the maximum number of symbols on the right-hand side of a grammar rule in T h . Dyer (2010a) presents a more practical intersection algorithm that avoids creating rules that are inaccessible from the start symbol. With deterministic M, the intersection complexity becomes O(|T h ||M| l N +1 ), where l N is the rank of the SCFG (i.e., l N is the maximum number of nonterminals on the right-hand side of a grammar rule). With Hiero-styles rules, l N = 2 so the complexity is O(|T h ||M| 3 ) in that case. 5 The PDA intersection algorithm from Section 3.3 has time and space complexity O(|T p ||M|). Finally, the FSA intersection algorithm has time and space complexity O(|T f ||M|) (Mohri 2009) .",
"cite_spans": [
{
"start": 891,
"end": 903,
"text": "(Mohri 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intersection:",
"sec_num": "2."
},
{
"text": "The shortest path algorithm on the hypergraph, RTN, and FSA representations requires linear time and space (given the underlying acyclicity) (Huang 2008; Mohri 2009) . As presented in Section 3.4, the PDA representation can require time cubic and space quadratic in |M|. Table 1 summarizes the complexity results for SCFGs of rank 2. The PDA representation is equivalent in time and superior in space complexity to the CFG/hypergraph representation, in general, and it can be superior in both space and time to the FSA representation depending on the relative SCFG and language model (LM) sizes. The FSA representation favors smaller target translation grammars and larger language models. In practice, the PDA and FSA representations benefit greatly from the optimizations mentioned previously (Figure 3 and accompanying discussion) . For the FSA representation, these operations can offset the exponential dependencies in the worstcase complexity analysis. For example, in a translation of a 15-word sentence taken at random from the development sets described later, expansion of an RTN yields a WFSA with 174 \u00d7 10 6 states. By contrast, if the RTN is determinized and minimized prior to expansion, the resulting WFSA has only 34 \u00d7 10 3 states. Size reductions of this magnitude are typical. In general, the original RTN, hypergraph, or CFG representation can be exponentially larger than the RTN/PDT optimized as described.",
"cite_spans": [
{
"start": 141,
"end": 153,
"text": "(Huang 2008;",
"ref_id": "BIBREF14"
},
{
"start": 154,
"end": 165,
"text": "Mohri 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 1",
"ref_id": "TABREF8"
},
{
"start": 795,
"end": 833,
"text": "(Figure 3 and accompanying discussion)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shortest Path:",
"sec_num": "3."
},
{
"text": "Although our interest is primarily in Hiero-style translation grammars, which have rank 2 and a relatively small number of nonterminals, this complexity analysis can be extended to other grammars. For SCFGs of arbitrary rank l N , translation complexity in time for hypergraphs becomes O(|G||s| l N +1 |M| l N +1 ); with FSAs the time complexity becomes O(e |G||s| l N +1 |M|); and with PDAs the time complexity becomes O(|G||s| l N +1 |M| 3 ). For more complex SCFGs with rules of rank greater than 2, such as SAMT (Zollmann and Venugopal 2006) or GHKM (Galley et al. 2004) , this suggests that PDA representations may offer computational advantages in the worst case relative to hypergraph representations, although this must be balanced against other available strategies such as binarization (Zhang et al. 2006; Xiao et al. 2009) or scope pruning (Hopkins and Langmead 2010) . Of course, practical translation systems introduce various pruning procedures to achieve much better decoding efficiency than the worst cases given here.",
"cite_spans": [
{
"start": 516,
"end": 545,
"text": "(Zollmann and Venugopal 2006)",
"ref_id": "BIBREF44"
},
{
"start": 554,
"end": 574,
"text": "(Galley et al. 2004)",
"ref_id": "BIBREF12"
},
{
"start": 796,
"end": 815,
"text": "(Zhang et al. 2006;",
"ref_id": "BIBREF43"
},
{
"start": 816,
"end": 833,
"text": "Xiao et al. 2009)",
"ref_id": "BIBREF41"
},
{
"start": 851,
"end": 878,
"text": "(Hopkins and Langmead 2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shortest Path:",
"sec_num": "3."
},
{
"text": "We will next describe the translation grammar and language model for our experiments, which will be used throughout the remainder of this article (except when stated otherwise). In the following sections we assess the complexity discussion with a contrast between HiFST (FSA representation) and HiPDT (PDA representation) under large grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shortest Path:",
"sec_num": "3."
},
{
"text": "Translation grammars are extracted from a subset of the GALE 2008 evaluation parallel text; 6 this is 2.1M sentences and approximately 45M words per language. We report translation results on a development set tune-nw (1,755 sentences) and a test set test-nw (1,671 sentences). These contain translations produced by the GALE program and portions of the newswire sections of the NIST evaluation sets MT02 through MT06. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Grammars and Language Models",
"sec_num": "4.1"
},
{
"text": "Number of n-grams with explicit conditional probability estimates assigned by the 4-gram language models M \u03b8 1 after entropy pruning of M 1 at threshold values \u03b8. Perplexities over the (concatenated) tune-nw reference translations are also reported. The 190 unigrams, which are not removed by pruning. \u03b8 0 7.5 \u00d7 10 \u22129 7.5 \u00d7 10 \u22128 7.5 \u00d7 10 \u22127 7.5 \u00d7 10 \u22126 7.5 \u00d7 10 \u22125 7.5 \u00d7 10 \u22124 7. In tuning the systems, MERT (Och 2003) iterative parameter estimation under IBM BLEU 8 is performed on the development set. The parallel corpus is aligned using MTTK (Deng and Byrne 2008) in both sourceto-target and target-to-source directions. We then follow published procedures (Chiang 2007; Iglesias et al. 2009b) to extract hierarchical phrases from the union of the directional word alignments. We call a translation grammar (G) the set of rules extracted from this process. For reference, the number of rules in G that can apply to the tune-nw is 1.1M, of which 593K are standard non-hierarchical phrases and 511K are strictly hierarchical rules.",
"cite_spans": [
{
"start": 254,
"end": 257,
"text": "190",
"ref_id": null
},
{
"start": 409,
"end": 419,
"text": "(Och 2003)",
"ref_id": "BIBREF29"
},
{
"start": 547,
"end": 568,
"text": "(Deng and Byrne 2008)",
"ref_id": "BIBREF9"
},
{
"start": 662,
"end": 675,
"text": "(Chiang 2007;",
"ref_id": "BIBREF7"
},
{
"start": 676,
"end": 698,
"text": "Iglesias et al. 2009b)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "We will use two English language models in these translation experiments. The first language model, denoted M 1 , is a 4-gram estimated over 1.3B words taken from the target side of the parallel text and the AFP and Xinhua portions of the English Gigaword Fourth Edition (LDC2009T13). We use both Kneser-Ney (Kneser and Ney 1995) and Katz (Katz 1987) smoothing in estimating M 1 . Where language model reduction is required, we apply Stolcke entropy pruning (Stolcke 1998) to M 1 under the relative perplexity threshold \u03b8. The resulting language model is labeled as M \u03b8 1 . The reduction in size in terms of component n-grams is summarized in Table 2 . For aggressive enough pruning, the original 4-gram model can be effectively reduced to a trigram, bigram, or unigram model. For both the Katz and the Kneser-Ney 4-gram language models: at \u03b8 = 7.5E \u2212 05 the number of 4-grams in the LM is effectively reduced to zero; at \u03b8 = 7.5E \u2212 4 the number of 3-grams is effectively 0; and at \u03b8 = 7.5E \u2212 3, only unigrams remain. Development set perplexities increase as entropy pruning becomes more aggressive, with the Katz smoothed model performing better under pruning (Chelba et al. 2010; Roark, Allauzen, and Riley 2013) .",
"cite_spans": [
{
"start": 308,
"end": 329,
"text": "(Kneser and Ney 1995)",
"ref_id": "BIBREF19"
},
{
"start": 334,
"end": 350,
"text": "Katz (Katz 1987)",
"ref_id": "BIBREF18"
},
{
"start": 458,
"end": 472,
"text": "(Stolcke 1998)",
"ref_id": "BIBREF38"
},
{
"start": 1161,
"end": 1181,
"text": "(Chelba et al. 2010;",
"ref_id": "BIBREF6"
},
{
"start": 1182,
"end": 1214,
"text": "Roark, Allauzen, and Riley 2013)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 643,
"end": 650,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "We will also use a larger language model, denoted M 2 , obtained by interpolating M 1 with a zero-cutoff stupid-backoff 5-gram model (Brants et al. 2007 ) estimated over 6.6B words of English newswire text; M 2 is estimated as needed for the n-grams required for the test sets. ",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "(Brants et al. 2007",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "We now compare HiFST and HiPDT in translation with our large grammar G. In this case we know that exact search is often not feasible for HiFST. We run both decoders over tune-nw with a restriction on memory use of 10 GB. If this limit is reached in decoding, the process is killed. 9 Table 3 shows the number of times each decoder succeeds in finding a hypothesis under the memory limit when decoding with various entropy-pruned LMs M \u03b8 1 . With \u03b8 = 7.5 \u00d7 10 \u22129 (row 2), HiFST can only decode 218 sentences, and HiPDT succeeds in 703 cases. The difference in success rates between the decoders is more pronounced as the language model is more aggressively pruned: for \u03b8 = 7.5 \u00d7 10 \u22127 HiPDT succeeds for all but three sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 3",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Exact Decoding with Large Grammars and Small Language Models",
"sec_num": "4.2"
},
{
"text": "As Table 3 shows, HiFST fails most frequently in its initial expansion from RTN to FSA; this operation depends only on the translation grammar and does not benefit from any reduction in the language model size. Subsequent intersection of the FSA with the language model can still pose a challenge, although as the language model is reduced, this intersection fails less often. By contrast, HiPDT intersects the translation grammar with the language model prior to expansion and this operation nearly always finishes successfully. The subsequent shortest path (or pruned expansion) operation is prone to failure, but the risk of this can be greatly reduced by using smaller language models.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Exact Decoding with Large Grammars and Small Language Models",
"sec_num": "4.2"
},
{
"text": "In the next section we contrast both HiPDT and HiFST for alignment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exact Decoding with Large Grammars and Small Language Models",
"sec_num": "4.2"
},
{
"text": "We continue to explore applications characterized by large translation grammars G and small language models M. As an extreme instance of a problem involving a large translation grammar and a simple target language model, we consider parallel text alignment under an Inversion Transduction Grammar (ITG) (Wu 1997 ). This task, or something like it, is often done in translation grammar induction. The process should yield the set of derivations, with scores, that generate the target sentence as a translation of the source sentence. In alignment the target language model is extremely simple: It is simply an acceptor for the target language sentence so that |M| is linear in the length of the target sentence. In contrast, the search space needs now to be represented with pushdown transducers (instead of pushdown automata) keeping track of both translations and derivations, that is, indices of the rules in the grammar (Iglesias et al. 2009a; de Gispert et al. 2010; Dyer 2010b) . We define a word-based translation grammar G ITG for the alignment problem as follows. First, we obtain word-to-word translation rules of the form X\u2192 s, t based on probabilities from IBM Model 1 translation tables estimated over the parallel text, where s and t are one source and one target word, respectively (\u223c16M rules). Then, we allow monotonic and inversion transduction of two adjacent nonterminals in the usual ITG style (i.e., add X\u2192 X 1 X 2 , X 1 X 2 and X\u2192 X 1 X 2 , X 2 X 1 ). Additionally, we allow unrestricted source word deletions (X\u2192 s, \u01eb ), and restricted target word insertions (X\u2192 X 1 X 2 , X 1 t X 2 ). This restriction, which is solely motivated by efficiency reasons, disallows the insertion of two consecutive target words. We make no claims about the suitability or appropriateness of this specific grammar for either alignment or translation; we introduce this grammar only to define a challenging alignment task.",
"cite_spans": [
{
"start": 303,
"end": 311,
"text": "(Wu 1997",
"ref_id": "BIBREF40"
},
{
"start": 923,
"end": 946,
"text": "(Iglesias et al. 2009a;",
"ref_id": "BIBREF17"
},
{
"start": 947,
"end": 970,
"text": "de Gispert et al. 2010;",
"ref_id": "BIBREF8"
},
{
"start": 971,
"end": 982,
"text": "Dyer 2010b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment with Inversion Transduction Grammars",
"sec_num": "4.3"
},
{
"text": "A set of 2,500 sentence pairs of up to 50 source and 75 target words was chosen for alignment. These sentences come from the same Chinese-to-English parallel data described in Section 4.1. Hard limits on memory usage (10GB) and processing time (10 minutes) were imposed for processing each sentence pair. If HiPDT or HiFST exceeded either limit in aligning any sentence pair, alignment was stopped and a \"memory/time failure\" was noted. Even if the resource limits are not exceeded, alignment may fail due to limitations in the grammar. This happens when either a particular word pair rule that is not in our Model 1 table, or more than one consecutive target insertions are needed to reach alignment. In such cases, we record a \"grammar failure,\" as opposed to a \"memory/time failure.\" Results are reported in Table 4 . Of the 2,500 sentence pairs, HiFST successfully aligns only 41% of the sentence pairs under these time and memory constraints. The reason for this low success rate is that HiFST must generate and expand all possible derivations under the ITG for a given sentence pair. Even if it is strictly enforced that the FSA in every CYK cell contains only partial derivations which produce substrings of the target sentence, expansion often exceeds the memory/time constraints. In contrast, HiPDT succeeds in aligning all sentence pairs that can be aligned under the grammar (89%), because it never fails due to memory or time constraints. In this experiment, if alignment is at all possible, HiPDT will find the best derivation. Alignment success rate (or coverage) could trivially be improved by modifying the ITG to allow more consecutive target insertions, or by increasing the number of word-to-word Table 4 Percentages of success and failure in aligning 2,500 sentence pairs under G ITG with HiFST and HiPDT. HiPDT finds an alignment whenever it is possible under the translation grammar.",
"cite_spans": [],
"ref_spans": [
{
"start": 811,
"end": 818,
"text": "Table 4",
"ref_id": null
},
{
"start": 1716,
"end": 1723,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignment with Inversion Transduction Grammars",
"sec_num": "4.3"
},
{
"text": "HiPDT Success Failure Success Failure memory/time grammar memory/time grammar 41% 53% 6% 89% 0% 11%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiFST",
"sec_num": null
},
{
"text": "rules, but that would not change the conclusion in the contrast between HiFST and HiPDT. The computational analysis from the beginning of this section applies to alignment. The language model M is replaced by an acceptor for the target sentence, and if we assume that the target sentence length is proportional to the source sentence length, it follows that |M| \u221d |s| and the worst-case complexity for HiPDT in alignment mode is O(|s| 6 |G|). This is comparable to ITG alignment (Wu 1997) and the intersection algorithm of Dyer (2010b).",
"cite_spans": [
{
"start": 479,
"end": 488,
"text": "(Wu 1997)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HiFST",
"sec_num": null
},
{
"text": "Our experimental results support the complexity analysis summarized in Table 1 . HiPDT is more efficient in ITG alignment and this is consistent with its linear dependence on the grammar size, whereas HiFST suffers from its exponential dependence. This use of PDAs in alignment does not rely on properties specific either to Hiero or to ITGs. We expect that the approach should be applicable with other types of SCFGs, although we note that alignment under SCFGs with an arbitrary number of nonterminals can be NP-hard (Satta and Peserico 2005) .",
"cite_spans": [
{
"start": 519,
"end": 544,
"text": "(Satta and Peserico 2005)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 1",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "HiFST",
"sec_num": null
},
{
"text": "The previous complexity analysis suggests that PDAs should excel when used with large translation grammars and relatively small n-gram language models. In hierarchical phrase-based translation, this is a somewhat unusual scenario: It is far more typical that translation tasks requiring a large translation grammar also require large language models. To accommodate these requirements we have developed a twopass decoding strategy in which a weak version of a large language model is applied prior to the expansion of the PDA, after which the full language model is applied to the resulting WFSA in a rescoring pass. An effective way of generating weak language models is by means of entropy pruning under a threshold \u03b8; these are the language models M \u03b8 1 of Section 4.1. Such a two-pass strategy is widely used in automatic speech recognition (Ljolje, Pereira, and Riley 1999) . The steps in two-pass translation using entropy-pruned language models are given here, and depicted in Figure 14 .",
"cite_spans": [
{
"start": 845,
"end": 878,
"text": "(Ljolje, Pereira, and Riley 1999)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 984,
"end": 993,
"text": "Figure 14",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "Step 1. We translate with M \u03b8 1 and G using the same parameters obtained by MERT for the baseline system, with the exception that the word penalty parameter is adjusted to produce hypotheses of roughly the correct length. This produces translation lattices that contain hypotheses with exact scores under G and M \u03b8 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "\u03a0 2 ({s} \u2022 G) \u2022 M \u03b8 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "Step 2. These translation lattices are pruned at beamwidth \u03b2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "[\u03a0 2 ({s} \u2022 G) \u2022 M \u03b8 1 ] \u03b2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "Step 3. We remove the M \u03b8 1 scores from the pruned translation lattices, reapply the full language model M 1 , and restore the word penalty parameter to the baseline value obtained by MERT. This gives an approximation to \u03a0 2 ({s} \u2022 G) \u2022 M 1 : scores are correctly assigned under G and M 1 , but only hypotheses that survived pruning at Step 2 are included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "We can rescore the lattices produced by the baseline system or by the two-pass system with the larger language model M 2 . If \u03b2 = \u221e or if \u03b8 = 0, the translation lattices obtained in Step 3 should be identical to lattices produced by the baseline system (i.e., the rescoring step is no longer needed). The aim is to increase \u03b8 to shrink the language model used at Step 1, but \u03b2 will then have to increase accordingly to avoid pruning away desirable hypotheses in Step 2. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HiPDT Two-Pass Translation Architecture and Experiments",
"sec_num": "5."
},
{
"text": "Two-pass HiPDT translation with an entropy pruned language model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 14",
"sec_num": null
},
{
"text": "The two-pass translation procedure requires removal of the weak language model scores used in the initial expansion of the translation search space; this is done so that only the translation scores under G remain after pruning. In the tropical semiring, the weak LM scores can be \"subtracted\" at the path level from the lattice, but this involves a determinization of an unweighted translation lattice, which can be very inefficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Removal of First-Pass Language Model Scores Using Lexicographic Semirings",
"sec_num": "5.1"
},
{
"text": "As an alternative we can define a lexicographic semiring (Shafran et al. 2011 ; Roark, Sproat, and Shafran 2011) w 1 , w 2 over the tropical weights w 1 and w 2 with the operations \u2295 and \u2297:",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Shafran et al. 2011",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Removal of First-Pass Language Model Scores Using Lexicographic Semirings",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w 1 , w 2 \u2295 w 3 , w 4 = w 1 , w 2 if w 1 < w 3 or (w 1 = w 3 and w 2 < w 4 ) w 3 , w 4 otherwise (8) w 1 , w 2 \u2297 w 3 , w 4 = w 1 + w 3 , w 2 + w 4",
"eq_num": "(9)"
}
],
"section": "Efficient Removal of First-Pass Language Model Scores Using Lexicographic Semirings",
"sec_num": "5.1"
},
{
"text": "The PDA algorithms described in Section 3 are valid under this new semiring because it is commutative and has the path property. In particular, the PDA representing {s} \u2022 G is constructed so that the translation grammar score appears in both w 1 and w 2 (i.e., it is duplicated). In the first-pass language model, w 1 has the n-gram language model scores and the w 2 are 0. After composition, the resulting automata have the combined translation grammar score and language model score in the first dimension, and the second dimension contains the translation grammar scores alone. Pruning can be performed under the lexicographic semiring with a threshold set so that only the combined scores in the first dimension are considered. The resulting automata can easily be mapped back into the regular tropical semiring such that only the translation scores in the second dimension are retained (this is a linear operation done by the fstmap operation in the OpenFST library).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient Removal of First-Pass Language Model Scores Using Lexicographic Semirings",
"sec_num": "5.1"
},
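The weight-pair operations in Equations (8) and (9) can be made concrete with a minimal sketch (not the OpenFst implementation) of the lexicographic tropical semiring, showing how duplicating the translation grammar score in both components lets the first-pass language model score be removed after pruning. All names and numbers below are illustrative.

```python
def lex_plus(a, b):
    """Equation (8): lexicographic minimum of two weight pairs."""
    (w1, w2), (w3, w4) = a, b
    if w1 < w3 or (w1 == w3 and w2 < w4):
        return (w1, w2)
    return (w3, w4)

def lex_times(a, b):
    """Equation (9): component-wise tropical extension (addition)."""
    return (a[0] + b[0], a[1] + b[1])

# Duplicating the translation score g in both components and carrying the
# first-pass LM score l only in the first gives pairs <g + l, g>: pruning
# sees the combined score, while the second component alone survives the
# later projection back to the tropical semiring.
g, l = 2.5, 1.0
w = lex_times((g, g), (l, 0.0))    # combined score 3.5, translation score 2.5
best = lex_plus(w, (3.5, 2.7))     # tie on first component; second decides
assert best == (3.5, 2.5)
```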
{
"text": "We wish to analyze the degree to which the two-pass decoding strategy introduces \"modeling errors\" into translation. A modeling error occurs in two-pass decoding whenever the decoder produces a translation whose score is less than the best attainable under the grammar and language model (i.e., whenever the best possible translation is discarded by pruning at",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "Step 2). We refer to these as modeling errors, rather than search errors, because they are due to differences in scores assigned by the models M 1 and M \u03b8 1 . Ideally, we would compare the two-pass translation system against a baseline system that performs exact translation, without pruning in search, under the grammar G and language model M 1 . This would allow us to address the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
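The definition above can be sketched directly: treating per-sentence scores as tropical weights (lower is better), a modeling error is counted whenever the two-pass score is worse than the exact baseline score under G and M 1 . The function and the scores below are hypothetical.

```python
def count_modeling_errors(exact_scores, two_pass_scores, tol=1e-6):
    """Count sentences whose two-pass score is worse (larger) than the
    score of the exact baseline; scores are negative log probabilities."""
    assert len(exact_scores) == len(two_pass_scores)
    return sum(1 for e, t in zip(exact_scores, two_pass_scores) if t > e + tol)

# Hypothetical per-sentence scores; on the second sentence the best path
# was discarded by pruning, so the two-pass score is worse.
exact    = [10.2, 7.5, 12.1, 9.9]
two_pass = [10.2, 7.8, 12.1, 9.9]
assert count_modeling_errors(exact, two_pass) == 1
```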
{
"text": "r Is a two-pass decoding procedure that uses entropy-pruned language models adequate for translation? How many modeling errors are introduced? Does two-pass decoding impact on translation quality?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "r Which smoothing/discounting technique is best suited for the first-pass language model in two-pass translation, and which smoothing/ discounting technique is best at avoiding modeling errors?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "Our grammar G is not suitable for these experiments, as we do not have a system capable of exact decoding under both G and M 1 . To create a suitable baseline we therefore reduce G by excluding rules that have a forward translation probability p < 0.01, and refer to this reduced grammar as G small . This process reduces the number of strictly hierarchical rules that apply to our tune-nw set from 511K to 189K, while the number of standard phrases is unchanged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "Under G small , both HiFST and HiPDT are able to exactly compose the entire space of possible candidate hypotheses with the language model and to extract the shortest path hypothesis. Because an exact decoding baseline is thus available, we can empirically evaluate the proposed two-pass strategy. Any degradation in translation quality can only be due to the modeling errors introduced by pruning under \u03b2 with respect to the entropy-pruned M \u03b8 1 . Figure 15 shows translation performance under grammar G small for different values of entropy pruning threshold \u03b8. Performance is reported after first-pass decoding with M \u03b8 1 (Step 1, Section 5), and after rescoring with M 1 (Step 3, Section 5) the first-pass lattices pruned at alternative \u03b2 beams. The first column reports the baseline for either Kneser-Ney and Katz language models, which are found by translation without entropy pruning, that is, performed with M 1 . Both yield 34.5 on test-nw.",
"cite_spans": [],
"ref_spans": [
{
"start": 449,
"end": 458,
"text": "Figure 15",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "The first and main conclusion from this figure is that the two-pass strategy is adequate because we are always able to recover the baseline performance. As expected, the harsher the entropy-pruning of M 1 (as we lower \u03b8) the greater \u03b2 must be to recover from the significant degradation in first-pass decoding. But even at a harsh \u03b8 = 7.5 \u00d7 10 \u22127 , when first-pass performance drops over 7 BLEU points, a relatively-low value of \u03b2 = 15 can recover the baseline performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "Although this is true independently of the LM smoothing approach, a second conclusion from the figure is that the choice of LM smoothing does impact first-pass translation performance. For entropy pruning at \u03b8 = 7.5 \u00d7 10 \u22127 , the Katz LMs perform better for smaller beamwidths \u03b2. These results are consistent with the test set perplexities of the entropy pruned LMs (Table 2) , and are also in line with other studies of Kneser-Ney smoothing and entropy pruning (Chelba et al. 2010; Roark, Allauzen, and Riley 2013) . Modeling errors are reported in Table 5 at the entropy pruning threshold \u03b8 = 7.5 \u00d7 10 \u22127 . As expected, modeling errors decrease as the beamwidth \u03b2 increases, although we find that the language model with Katz smoothing has fewer modeling errors. However, modeling errors do not necessarily impact corpus level BLEU scores. For wide beamwidths (e.g., \u03b2 = 15 here), there are still some modeling errors, but these are either few enough or subtle enough that two-pass decoding under either smoothing method yields the same corpus level BLEU score as the exact decoding baseline.",
"cite_spans": [
{
"start": 462,
"end": 482,
"text": "(Chelba et al. 2010;",
"ref_id": "BIBREF6"
},
{
"start": 483,
"end": 515,
"text": "Roark, Allauzen, and Riley 2013)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 366,
"end": 375,
"text": "(Table 2)",
"ref_id": null
},
{
"start": 550,
"end": 557,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Translation Quality and Modeling Errors in Two-Pass Decoding",
"sec_num": "5.2"
},
{
"text": "Two-pass translation modeling errors as a function of RTN expansion pruning threshold \u03b2. A modeling error occurs whenever the score of a hypothesis produced by the two-pass translation differs from the score found by the exact baseline system. Errors are tabulated over systems reported in Figure 15 , at \u03b8 = 7.5 \u00d7 10 \u22127 . r How do these compare against the predicted computational complexity?",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 299,
"text": "Figure 15",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Table 5",
"sec_num": null
},
{
"text": "In this section we turn back to the original large grammar, for which HiFST cannot perform exact decoding (see Table 3 ). In contrast, HiPDT is able to do exact decoding so we study tradeoffs in speed and translation performance. The speed of two-pass decoding can be increased by decreasing \u03b2 and/or increasing \u03b8, but at the risk of degradation in translation performance. For grammar G and language model M 1 we plot in Figure 16 the BLEU score against speed as a function of \u03b2 for a selection of \u03b8 values. BLEU score is measured over the entire test set test-nw but speed is calculated only on sentences of length up to 20 words (\u223c500 sentences). In computing speed we measure not only the PDA operations, but the entire HiPDT decoding process described in Figure 14 , including CYK parsing and the application of M 1 . We note in passing that these unusually slow decoding speeds are a consequence of the large grammars, language models, and broad pruning thresholds chosen for these experiments; in practice, translation with either HiPDT or HiFST is much faster.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 3",
"ref_id": "TABREF10"
},
{
"start": 422,
"end": 431,
"text": "Figure 16",
"ref_id": "FIGREF0"
},
{
"start": 760,
"end": 769,
"text": "Figure 14",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Table 5",
"sec_num": null
},
{
"text": "In these experiments we find that the language model entropy pruning threshold \u03b8 and the likelihood beamwidth \u03b2 work together to balance speed against translation quality. For every entropy pruning threshold \u03b8 value considered, there is a value of \u03b2 for which there is no degradation in translation quality. For example, suppose we want to attain a translation quality of 34.5 BLEU: then \u03b2 should be set to 12 or greater. If the goal is to find the fastest system at this level, then we choose \u03b8 = 7.5 \u00d7 10 \u22125 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 5",
"sec_num": null
},
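The operating-point selection described above amounts to keeping the (\u03b8, \u03b2) settings that meet the quality target and choosing the fastest among them. A hedged sketch with illustrative measurements (the numbers are hypothetical; only the qualitative pattern follows the text):

```python
def pick_fastest(configs, bleu_target):
    """Among configurations meeting the BLEU target, return the one with
    the highest decoding speed (words/sec), or None if none qualifies."""
    ok = [c for c in configs if c["bleu"] >= bleu_target]
    return max(ok, key=lambda c: c["wps"]) if ok else None

# Hypothetical measurements at beta = 12 for three entropy thresholds:
configs = [
    {"theta": 7.5e-7, "beta": 12, "bleu": 34.5, "wps": 1.2},
    {"theta": 7.5e-5, "beta": 12, "bleu": 34.5, "wps": 3.0},
    {"theta": 7.5e-3, "beta": 12, "bleu": 34.1, "wps": 4.5},
]
best = pick_fastest(configs, bleu_target=34.5)
assert best["theta"] == 7.5e-5
```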
{
"text": "The interaction between pruning in expansion and pruning of the language model is explained by Figure 17 , where decoding and rescoring times are shown for various",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 104,
"text": "Figure 17",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Table 5",
"sec_num": null
},
{
"text": "HiPDT translation quality versus speed (decoding with G, M \u03b8 1 + rescoring with M 1 ) under different entropy pruning thresholds \u03b8 and for likelihood beamwidths \u03b2 = 15, 12, 9, 8, 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 16",
"sec_num": null
},
{
"text": "Accumulated decoding+rescoring times for HiPDT under different entropy pruning thresholds, reaching a performance of at least 34.5 BLEU, for which \u03b2 is set to 12.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 17",
"sec_num": null
},
{
"text": "values of \u03b8 and \u03b2 that achieve at least the translation quality target of 34.5. As \u03b8 increases, decoding time decreases because a smaller language model is easier to apply; however, rescoring times increase, because the larger values of \u03b2 lead to larger WFSAs after expansion, and these are costly to rescore. The balance occurs at \u03b8 = 7.5 \u00d7 10 \u22125 and a translation rate of 3.0 words/sec. In this case, entropy pruning yields a severely shrunken bigram language model, but this may vary depending on the translation grammar and the original, unpruned LM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 17",
"sec_num": null
},
{
"text": "r Does the HiPDT two-pass decoding generate lattices that can be useful in rescoring?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring with 5-Gram Language Models and LMBR Decoding",
"sec_num": "5.4"
},
{
"text": "We now report on rescoring experiments using WFSAs produced by the two-pass HiPDT translation system under the large translation grammar G. We demonstrate that HiPDT can be used to generate large, compact representations of the translation space that are suitable for rescoring with large language models or by alternative decoding procedures. We investigate translation performance by applying versions of the language model M 2 estimated with stupid backoff. We also investigate minimum Bayes risk (MBR) decoding (Kumar and Byrne 2004) as an alternative search strategy. We are particularly interested in lattice MBR (LMBR) (Tromble et al. 2008) , which is well suited for the large WFSAs that the system can generate; we use the implementation described by . There are two parameters to be tuned: a scaling parameter to normalize the evidence scores and a word penalty applied to the hypotheses space; these are tuned jointly on the tune-nw set. Results are reported in Figure 18 .",
"cite_spans": [
{
"start": 515,
"end": 537,
"text": "(Kumar and Byrne 2004)",
"ref_id": "BIBREF21"
},
{
"start": 626,
"end": 647,
"text": "(Tromble et al. 2008)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 973,
"end": 982,
"text": "Figure 18",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Rescoring with 5-Gram Language Models and LMBR Decoding",
"sec_num": "5.4"
},
{
"text": "We note first that rescoring with the large language model M 2 , which is effectively interpolated with M 1 , gives consistent gains over initial results obtained with M 1 alone. After 5-gram rescoring there is already +0.5 BLEU improvement compared with G small . With a richer translation grammar we have generated a richer lattice that allows gains to be gotten by our lattice rescoring techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring with 5-Gram Language Models and LMBR Decoding",
"sec_num": "5.4"
},
{
"text": "HiPDT decoding with G. Decoding language model M \u03b8 1 and first pass rescoring language model M 1 are Katz. Results on test-nw are given for ML-Decoding under the 5-gram stupid backoff language model ('5gML') and for LMBR and for LMBR decoding. Parameter values are \u03b2 = 15, 12, 9, 8 and \u03b8 = 7.5 \u00d7 10 \u22127 , 7.5 \u00d7 10 \u22125 , 7.5 \u00d7 10 \u22123 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 18",
"sec_num": null
},
{
"text": "We also find that BLEU scores degrade smoothly as \u03b2 decreases and the expansion pruning beamwidth narrows, and at all values of \u03b2 LMBR gives improvement over the MAP hypotheses. Because LMBR relies on posterior distributions over n-grams, we conclude that HiPDT is able to generate compact representations of large search spaces with posteriors that are robust to pruning conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 18",
"sec_num": null
},
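LMBR under the linear approximation of Tromble et al. (2008) maximizes an expected gain built from n-gram posteriors. As a rough sketch, here applied to an n-best list rather than a full lattice (an assumption made for brevity; the paper operates on WFSAs) and with hypothetical \u03b8 parameters:

```python
from collections import Counter

def ngrams(words, n):
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def lmbr_decode(nbest, posteriors, thetas, max_n=4):
    """nbest: list of token lists; posteriors: {ngram: posterior prob};
    thetas: [theta_0, theta_1, ..., theta_max_n], tuned on dev data.
    Gain(E) = theta_0*|E| + sum over n-grams u of theta_|u| * #_u(E) * p(u)."""
    def gain(hyp):
        g = thetas[0] * len(hyp)
        for n in range(1, max_n + 1):
            for u, c in Counter(ngrams(hyp, n)).items():
                g += thetas[n] * c * posteriors.get(u, 0.0)
        return g
    return max(nbest, key=gain)
```

In practice the posteriors would be computed from the pruned WFSA; here they would simply be supplied as a dictionary.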
{
"text": "Finally, we find that increasing \u03b8 degrades performance quite smoothly for \u03b2 \u2265 9. Again, with appropriate choices of \u03b8 and \u03b2 we can easily reach a compromise between decoding speed and final performance of our HiPDT system. For instance, with \u03b8 = 7.5 \u00d7 10 \u22127 and \u03b2 = 12, for which we decode at a rate of 3 words/sec as seen in Figure 16 , we are losing only 0.5 BLEU after LMBR compared to \u03b8 = 7.5 \u00d7 10 \u22127 and \u03b2 = 15.",
"cite_spans": [],
"ref_spans": [
{
"start": 327,
"end": 336,
"text": "Figure 16",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 18",
"sec_num": null
},
{
"text": "There is extensive prior work on computational efficiency and algorithmic complexity in hierarchical phrase-based translation. The challenge is to find algorithms that can be made to work with large translation grammars and large language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "Following the original algorithms and analysis of Chiang (2007) , Huang and Chiang (2007) developed the cube-growing algorithm, and more recently Huang and Mi (2010) developed an incremental decoding approach that exploits the left-to-right nature of n-gram language models.",
"cite_spans": [
{
"start": 50,
"end": 63,
"text": "Chiang (2007)",
"ref_id": "BIBREF7"
},
{
"start": 66,
"end": 89,
"text": "Huang and Chiang (2007)",
"ref_id": "BIBREF15"
},
{
"start": 146,
"end": 165,
"text": "Huang and Mi (2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "Search errors in hierarchical translation, and in translation more generally, have not been as extensively studied; this is undoubtedly due to the difficulties inherent in finding exact translations for use in comparison. Using a relatively simple phrase-based translation grammar, Iglesias et al. (2009b) compared search via cube-pruning to an exact FST implementation (Kumar, Deng, and Byrne 2006) and found that cube-pruning suffered significant search errors. For Hiero translation, an extensive comparison of search errors between the cube pruning and FSA implementation was presented by Iglesias et al. (2009a) and de Gispert et al. (2010) . The effect of search errors has also been studied in phrase-based translation by Zens and Ney (2008) . Relaxation techniques have also recently been shown to find exact solutions in parsing (Koo et al. 2010) , phrasebased SMT (Chang and Collins 2011) , and in tree-to-string translation under trigram language models (Rush and Collins 2011) ; this prior work involved much smaller grammars and languages models than have been considered here.",
"cite_spans": [
{
"start": 370,
"end": 399,
"text": "(Kumar, Deng, and Byrne 2006)",
"ref_id": "BIBREF23"
},
{
"start": 593,
"end": 616,
"text": "Iglesias et al. (2009a)",
"ref_id": "BIBREF17"
},
{
"start": 621,
"end": 645,
"text": "de Gispert et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 729,
"end": 748,
"text": "Zens and Ney (2008)",
"ref_id": "BIBREF42"
},
{
"start": 838,
"end": 855,
"text": "(Koo et al. 2010)",
"ref_id": "BIBREF20"
},
{
"start": 874,
"end": 898,
"text": "(Chang and Collins 2011)",
"ref_id": "BIBREF5"
},
{
"start": 965,
"end": 988,
"text": "(Rush and Collins 2011)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "Efficiency in synchronous parsing with Hiero grammars and hypergraphs has been studied previously by Dyer (2010b) , who showed that a single synchronous parsing algorithm (Wu 1997 ) can be significantly improved upon in practice through hypergraph compositions. We developed similar procedures for our HiFST decoder (Iglesias et al. 2009a; de Gispert et al. 2010 ) via a different route, after noting that with the space of translations represented as WFSAs, alignment can be performed using operations over WFSTs (Kumar and Byrne 2005) .",
"cite_spans": [
{
"start": 101,
"end": 113,
"text": "Dyer (2010b)",
"ref_id": "BIBREF11"
},
{
"start": 171,
"end": 179,
"text": "(Wu 1997",
"ref_id": "BIBREF40"
},
{
"start": 302,
"end": 339,
"text": "HiFST decoder (Iglesias et al. 2009a;",
"ref_id": null
},
{
"start": 340,
"end": 362,
"text": "de Gispert et al. 2010",
"ref_id": "BIBREF8"
},
{
"start": 514,
"end": 536,
"text": "(Kumar and Byrne 2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "Although entropy-pruned language models have been used to produce real-time translation systems (Prasad et al. 2007) , we believe our use of entropy-pruned language models in two-pass translation to be novel. This is an approach that is widely used in automatic speech recognition (Ljolje, Pereira, and Riley 1999) and we note that it relies on efficient representation of very large search spaces T for subsequent rescoring, as is possible with FSAs and PDAs.",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Prasad et al. 2007)",
"ref_id": "BIBREF31"
},
{
"start": 281,
"end": 314,
"text": "(Ljolje, Pereira, and Riley 1999)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6."
},
{
"text": "In this article, we have described a novel approach to hierarchical machine translation using pushdown automata. We have presented fundamental PDA algorithms including composition, shortest-path, (pruned) expansion, and replacement and have shown how these can be used in PDA-based machine translation decoding and how this relates to and compares with hypergraph and FSA-based decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "On the basis of the experimental results presented in the previous sections, we can now address the questions laid out in Sections 4 and 5: r A two-pass translation decoding procedure in which translation is first performed with a weak entropy-pruned language model and followed by admissible likelihood-based pruning and rescoring with a full language model can yield good quality translations. Translation performance does not degrade significantly unless the first-pass language model is very heavily pruned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "r As predicted by the analysis of algorithmic complexity, intersection and expansion algorithms based on the PDA representation are able to perform exact decoding with large translation and weak language models. By contrast, RTN to FSA expansion fails with large translation grammars, regardless of the size of the language model. With large translation grammars, language model composition prior to expansion may be more attractive than expansion prior to language model composition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "r Our experimental results suggest that for a translation grammar and a language model of a particular size, and given a value of language model entropy pruning threshold \u03b8, there is a value of the pruned expansion parameter \u03b2 for which there is no degradation in translation quality with HiPDT. This makes exact decoding under large translation grammars possible. The values of \u03b8 and \u03b2 will be grammar-and task-dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "r Although there is some interaction between parameter tuning, pruning thresholds, and language modeling strategies, the variation is not significant enough to indicate that a particular language model or smoothing technique is best. This is particularly true if minimum Bayes risk decoding is applied to the output translation lattices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Several questions naturally arise about the decoding strategies presented here. One is whether inadmissible pruning methods can be applied to the PDA-based systems that are analogous to those used in current hypergraph-based systems such as cube-pruning (Chiang 2007) . Another is whether a hybrid PDA-FSA system, where some parts of the PDA are pre-expanded and some not, could provide benefits over full pre-expansion (FSA) or none (PDA). We leave these questions for future work.",
"cite_spans": [
{
"start": 254,
"end": 267,
"text": "(Chiang 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "PRUNEDEXPANSION(T, \u03b2) 1 (d R , B R ) \u2190 SHORTESTDISTANCE (T R ) 2 \u03bb \u2190 d R [I, f ] + \u03b2 \u22b2 Compute the pruning threshold 3 B \u2190 REVERSE (B R )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "\u22b2 Compute the balance information in T from the one in T R 4 (I \u2032 , f \u2032 ) \u2190 ((I, \u01eb), ( f, \u01eb)) \u22b2 I \u2032 and f \u2032 are the initial and final states of the pruned expansion 1 \u22b2 Returns true iff a path from (q, z) to (q \u2032 , z) with weight w belongs to an accepting path below threshold 2 w I \u2190 d [(q, z) ] + w \u22b2 Shortest distance from I to (q \u2032 , z) when taking a path from (q, z) , z) ] \u2190 w F 9 if \u03bb < w I + w F then \u22b2 w I + w F : min. weight of an accepting path taking a path of weight w from (q, z) to (q \u2032 , z) 10 return false 11 PROCESSSTATE ((q \u2032 , z)) 12 return true PROCESSSTATE((q, z)) 1 if (q, z) \u2208 Q \u2032 then \u22b2 If state (q, z) does not exist yet, create it and add it to the queue 2 Q \u2032 \u2190 Q \u2032 \u222a {(q, z)} 3 ENQUEUE (S, (q, z))",
"cite_spans": [
{
"start": 287,
"end": 294,
"text": "[(q, z)",
"ref_id": null
},
{
"start": 365,
"end": 371,
"text": "(q, z)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 372,
"end": 376,
"text": ", z)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "5 (F \u2032 , \u03c1 \u2032 ( f \u2032 )) \u2190 ({ f \u2032 }, 0) 6 (d[I \u2032 ], s[I \u2032 ]) \u2190 (0, I \u2032 ) 7 (d[I \u2032 ], d[ f \u2032 ]) \u2190 (d R [I, f ], 0) 8 (z D , D[ f ]) \u2190 (\u01eb, 0) 9 S \u2190 Q \u2032 \u2190 {I \u2032 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "PDT pruned expansion algorithm. We assume that F = { f } and \u03c1( f ) = 0 to simplify the presentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "where (q s , z) = s [(q, z) ]. This implies that assuming when (q, z) is visited, d[(n[e \u2032 ], z \u2032 )] is known; we then have all the required information for deciding whether e should be pruned or retained. In order to ensure that each state is visited once, we need to ensure that d [(q, z) ] is known when (q, z) is visited so we can apply an A * queue discipline among the states sharing the same stack. Both conditions can be achieved by using a queue discipline defined by a partial order \u227a such that z is a prefix of z \u2032 \u21d2 (q, z) \u227a (q \u2032 , z \u2032 ) (B.12)",
"cite_spans": [
{
"start": 20,
"end": 27,
"text": "[(q, z)",
"ref_id": null
},
{
"start": 283,
"end": 290,
"text": "[(q, z)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "d [(q, z) ]",
"cite_spans": [
{
"start": 2,
"end": 9,
"text": "[(q, z)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "+ d[(q, z)] < d[(q \u2032 , z)] + d[(q \u2032 , z)] \u21d2 (q, z) \u227a (q \u2032 , z) (B.13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "We also assume that all states sharing the same stack will be dequeued consecutively (z = z \u2032 \u2192 for all (q, q \u2032 ), (q, z) \u227a (q \u2032 , z \u2032 ) or for all (q, q \u2032 ), (q \u2032 , z \u2032 ) \u227a (q, z)). This allows us to cache some computations (the D data structure as described subsequently). The pseudo code of the algorithm is given in Figure 19 . First, the shortest distance algorithm is applied to T R and the absolute pruning threshold is computed accordingly (lines 1-2). The resulting balanced data information is then reversed (line 3). The initial and final states are created (lines 4-5) and the d, d, and D data structures are initialized accordingly (lines 6-8). The default value in these data structures is assumed to be \u221e. The queue is initialized containing the initial state (line 9).",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 329,
"text": "Figure 19",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "The state (q, z) at the head of the queue is dequeued (lines 10-12). If (q, z) admits an incoming open-parenthesis transition, B contains the balance information for that state and D can be updated accordingly (lines 13-18).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "If e is a regular transition, the resulting transition ((q, z), i[e], o[e], w[e], (n[e], z)) in T \u2032 can be pruned using the criterion derived from Equation (B.11). If it is retained, the transition is created as well as its destination state (n[e], z) if needed (lines 20-22).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "If e is an open-parenthesis transition, each balanced path starting by the resulting transition in T \u2032 and ending by a close-parenthesis transition is treated as a metatransition and pruned using the same criterion as regular transitions (lines 23-29). If any of these meta-transitions is retained, the transition ((q, z), \u01eb, \u01eb, w[e], (n[e], zi[e])) resulting from e is created as well as its destination state (n[e], zi[e]) if needed (lines 30-35) .",
"cite_spans": [
{
"start": 435,
"end": 448,
"text": "(lines 30-35)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "If e is a closed-parenthesis transition, it is created if it belongs to a balanced path below the threshold (lines 36-39).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
{
"text": "Finally, the resulting transducer T \u2032 \u03b2 is returned (line 40).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 19",
"sec_num": null
},
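Stripped of the stack and parenthesis bookkeeping that the full algorithm maintains, the pruning criterion behind PRUNEDEXPANSION reduces to the familiar beam test on shortest distances: a transition survives only if some accepting path through it is within \u03b2 of the overall best path. A simplified sketch on a toy automaton (all names illustrative):

```python
def keep_transition(d, d_rev, src, w, dst, shortest, beta):
    """Keep transition (src, w, dst) iff the best accepting path through it
    has weight within beta of the overall shortest distance.
    d: shortest distance from the initial state; d_rev: to the final state."""
    return d[src] + w + d_rev[dst] <= shortest + beta

# Toy acyclic automaton 0 -> 1 -> 2, best path weight 2.0:
d     = {0: 0.0, 1: 1.0, 2: 2.0}   # forward shortest distances
d_rev = {0: 2.0, 1: 1.0, 2: 0.0}   # reverse shortest distances
shortest = 2.0
assert keep_transition(d, d_rev, 0, 1.0, 1, shortest, beta=0.5)       # on a best path
assert not keep_transition(d, d_rev, 0, 4.0, 1, shortest, beta=0.5)   # 0+4+1 > 2.5
```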
{
"text": "For details see http://www.nist.gov/itl/iad/mig/openmt12.cfm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For simplicity, we assume T has only one final state. 3 This assumes all paths from q to q \u2032 pass through s. The RELAX operation(Figure 9) handles the general case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that d[p[e \u2032 ], q \u2032 ] could be replaced by d R [q \u2032 , p[e \u2032 ]].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The modified Bar-Hillel construction described byChiang (2007) has time and space complexity O(|T h ||M| 4 ); the modifications were introduced presumably to benefit the subsequent pruning method employed (but seeHuang, Zhong, & Gildea 2005).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See http://projects.ldc.upenn.edu/gale/data/catalog.html. We excluded the UN material and the LDC2002E18, LDC2004T08, LDC2007E08, and CUDonga collections. 7 See http://www.itl.nist.gov/iad/mig/tests/mt/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See ftp://jaguar.ncsl.nist.gov/mt/resources/mteval-v13.pl.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the UNIX ulimit command. The experiment was carried out over machines with different configurations and loads, so these numbers should be considered as approximate values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7-ICT-2009-4) under grant agreement number 247762, and was supported in part by the GALE program of the Defense Advanced Research Projects Agency, contract no. HR0011-06-C-0022, and a May 2010 Google Faculty Research Award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Given a pair (T 1 , T 2 ) where T 1 is a weighted pushdown transducer and the T 2 is a weighted finite-state transducer, and such that T 1 has input and output alphabets \u03a3 and \u2206 and T 2 has input and output alphabets \u2206 and \u0393, then there exists a weighted pushdown transducer T 1 \u2022 T 2 , which is the composition of T 1 and T 2 , such that for all (x, y) \u2208 \u03a3 * \u00d7 \u0393 * :We also assume that T 2 has no input-\u01eb transitions, noting that for T 2 with input-\u01eb transitions, an epsilon filter (Mohri 2009 ; Allauzen, Riley, and Schalkwyk 2011) generalized to handle parentheses could be used. A state in T is a pair (q 1 , q 2 ) where q 1 is a state of T 1 and q 2 a state of T 2 . Given a transition e 1 = (q 1 , a, b, w 1 , q \u2032 1 ) in T 1 , transitions out of (q 1 , q 2 ) in T are obtained using the following rules. If b \u2208 \u2206, then e 1 can be matched with a transition (q 2 , b, c, wIf b = \u01eb, then e 1 is matched with staying in q 2 resulting in a transition ((q 1 , q 2 ), a, \u01eb, w 1 , (q \u2032 1 , q 2 )). Finally, if b = a \u2208 \u03a0, e 1 is also matched with staying in q 2 , resulting in a transition ((q 1 , q 2 ), a, a, w 1 , (q \u2032 1 , q 2 )) in T. The initial state is (I 1 , I 2 ) and a state (q 1 , q 2 ) in T is final when both q 1 and q 2 are both final. Weight values are assigned as \u03c1((q 1 , q 2 )) = \u03c1 1 (q 1 ) + \u03c1 2 (q 2 ).",
"cite_spans": [
{
"start": 483,
"end": 494,
"text": "(Mohri 2009",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A. Composition of a Weighted PDT and a Weighted FST",
"sec_num": null
},
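The three matching rules of this construction can be sketched as a small arc-expansion routine. This is an illustration of the rules in the appendix, not the OpenFst code; the epsilon filter is omitted and all structures are hypothetical. Arcs are (input, output, weight, destination) tuples, with None standing for \u01eb.

```python
def arcs_from(e1, q2, t2_arcs_from, parens):
    """Yield arcs leaving the composed state (q1, q2) induced by T1 arc e1.
    e1 = (q1, in, out, weight, dst); t2_arcs_from(q2) yields T2 arcs
    (in, out, weight, dst) leaving q2; parens is the parenthesis alphabet."""
    q1, a, b, w1, q1p = e1
    if b is None or b in parens:        # epsilon or parenthesis: T2 stays put
        yield ((q1, q2), a, b, w1, (q1p, q2))
    else:                               # ordinary output symbol: T2 must match
        for (bb, c, w2, q2p) in t2_arcs_from(q2):
            if bb == b:
                yield ((q1, q2), a, c, w1 + w2, (q1p, q2p))

# Toy example: T2 has one arc b:c/0.5 from state 2 to state 3.
parens = {"(", ")"}
def t2_arcs(q2):
    return [("b", "c", 0.5, 3)] if q2 == 2 else []

assert list(arcs_from((1, "a", "b", 1.0, 5), 2, t2_arcs, parens)) \
    == [((1, 2), "a", "c", 1.5, (5, 3))]
assert list(arcs_from((1, "(", "(", 0.0, 5), 2, t2_arcs, parens)) \
    == [((1, 2), "(", "(", 0.0, (5, 2))]
```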
{
"text": "Let d R and B R be the data structures computed by the shortest-distance algorithm applied to T R . For a state q in T \u2032 (or equivalently T \u2032 \u03b2 ), let d [q] denote the shortest distance from the initial state to q, d [q] denote the shortest distance from q to the final state, and s[q] denote the destination state of the last unbalanced open-parenthesis transition on a shortest path from the initial state to q.The algorithm is based on the following property: Letting e denote a transition in T \u2032 such that p[e] = (q, z) and z = z \u2032 a, the weight of a shortest path through e can be expressed as:d [(q, z) ",
"cite_spans": [
{
"start": 153,
"end": 156,
"text": "[q]",
"ref_id": null
},
{
"start": 217,
"end": 220,
"text": "[q]",
"ref_id": null
},
{
"start": 601,
"end": 608,
"text": "[(q, z)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix B. Pruned Expansion",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Theory of Parsing, Translation and Compiling",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, Alfred V. and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation and Compiling, volume 1-2. Prentice-Hall.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "OpenFst: A general and efficient weighted finite-state transducer library",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Schalkwyk",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Skut",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bar-Hillel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perles",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shamir",
"suffix": ""
}
],
"year": 1964,
"venue": "Language and Information: Selected Essays on their Theory and Application",
"volume": "6482",
"issue": "",
"pages": "116--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allauzen, Cyril, Michael Riley, and Johan Schalkwyk. 2011. Filters for efficient composition of weighted finite-state transducers. In Proceedings of CIAA, volume 6482 of LNCS, pages 28-38. Blois. Allauzen, Cyril, Michael Riley, Johan Schalkwyk, Wojciech Skut, and Mehryar Mohri. 2007. OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of CIAA, pages 11-23. http://www.openfst.org. Bar-Hillel, Y., M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. In Y. Bar-Hillel, editor, Language and Information: Selected Essays on their Theory and Application. Addison-Wesley, pages 116-150.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transductions and Context-Free Languages",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Berstel",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berstel, Jean. 1979. Transductions and Context-Free Languages. Teubner.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient path counting transducers for minimum Bayes-risk decoding of statistical machine translation lattices",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Blackwood",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL: Short Papers",
"volume": "",
"issue": "",
"pages": "858--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blackwood, Graeme, Adri\u00e0 de Gispert, and William Byrne. 2010. Efficient path counting transducers for minimum Bayes-risk decoding of statistical machine translation lattices. In Proceedings of the ACL: Short Papers, pages 27-32, Uppsala. Brants, Thorsten, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of EMNLP-ACL, pages 858-867, Prague.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exact decoding of phrase-based translation models through lagrangian relaxation",
"authors": [
{
"first": "Yin-Wen",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "26--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, Yin-Wen and Michael Collins. 2011. Exact decoding of phrase-based translation models through lagrangian relaxation. In Proceedings of EMNLP, pages 26-37, Edinburgh.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Study on interaction between entropy pruning and Kneser-Ney smoothing",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Neveitt",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Interspeech",
"volume": "2",
"issue": "",
"pages": "242--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelba, Ciprian, Thorsten Brants, Will Neveitt, and Peng Xu. 2010. Study on interaction between entropy pruning and Kneser-Ney smoothing. In Proceedings of Interspeech, pages 2,242-2,245, Makuhari.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, David. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hierarchical phrase-based translation with weighted finite state transducers and shallow-n grammars",
"authors": [
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Gonzalo",
"middle": [],
"last": "Iglesias",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Blackwood",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [
"R"
],
"last": "Banga",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "de Gispert, Adri\u00e0, Gonzalo Iglesias, Graeme Blackwood, Eduardo R. Banga, and William Byrne. 2010. Hierarchical phrase-based translation with weighted finite state transducers and shallow-n grammars. Computational Linguistics, 36(3):201-228.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "HMM word and phrase alignment for statistical machine translation",
"authors": [
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "16",
"issue": "3",
"pages": "494--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deng, Yonggang and William Byrne. 2008. HMM word and phrase alignment for statistical machine translation. IEEE Transactions on Audio, Speech, and Language Processing, 16(3):494-507.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Formal Model of Ambiguity and its Applications in Machine Translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dyer, Chris. 2010a. A Formal Model of Ambiguity and its Applications in Machine Translation. Ph.D. thesis, University of Maryland.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Two monolingual parses are better than one (synchronous parse)",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "263--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dyer, Chris. 2010b. Two monolingual parses are better than one (synchronous parse). In Proceedings of NAACL-HLT, pages 263-266, Los Angeles, CA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "What's in a translation rule",
"authors": [
{
"first": "M",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "273--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galley, M., M. Hopkins, K. Knight, and D. Marcu. 2004. What's in a translation rule. In Proceedings of HLT-NAACL, pages 273-280, Boston, MA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SCFG decoding without binarization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Langmead",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "646--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hopkins, M. and G. Langmead. 2010. SCFG decoding without binarization. In Proceedings of EMNLP, pages 646-655, Cambridge, MA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Advanced dynamic programming in semiring and hypergraph frameworks",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Liang. 2008. Advanced dynamic programming in semiring and hypergraph frameworks. In Proceedings of COLING, pages 1-18, Manchester.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Liang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of ACL, pages 144-151, Prague.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient incremental decoding for tree-to-string translation",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "273--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Liang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proceedings of EMNLP, pages 273-283, Cambridge, MA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hierarchical phrase-based translation with weighted finite state transducers",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Gonzalo",
"middle": [],
"last": "Iglesias",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [
"R"
],
"last": "Banga",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology, Parsing '05",
"volume": "",
"issue": "",
"pages": "380--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Liang, Hao Zhang, and Daniel Gildea. 2005. Machine translation as lexicalized parsing with hooks. In Proceedings of the Ninth International Workshop on Parsing Technology, Parsing '05, pages 65-73, Vancouver. Iglesias, Gonzalo, Adri\u00e0 de Gispert, Eduardo R. Banga, and William Byrne. 2009a. Hierarchical phrase-based translation with weighted finite state transducers. In Proceedings of NAACL-HLT, pages 433-441, Boulder, CO. Iglesias, Gonzalo, Adri\u00e0 de Gispert, Eduardo R. Banga, and William Byrne. 2009b. Rule filtering by pattern for efficient hierarchical translation. In Proceedings of EACL, pages 380-388, Athens.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer",
"authors": [
{
"first": "Slava",
"middle": [
"M"
],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing",
"volume": "35",
"issue": "3",
"pages": "400--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz, Slava M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, 35(3):400-401.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improved backing-off for m-gram language modeling",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Kneser",
"suffix": ""
},
{
"first": "Herman",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of ICASSP",
"volume": "1",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kneser, Reinhard and Herman Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of ICASSP, volume 1, pages 181-184, Detroit, MI.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dual decomposition for parsing with non-projective head automata",
"authors": [
{
"first": "",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Terry",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Cambridge",
"suffix": ""
},
{
"first": "Werner",
"middle": [],
"last": "Kuich",
"suffix": ""
},
{
"first": "Arto",
"middle": [],
"last": "Salomaa",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of EMNLP",
"volume": "1",
"issue": "",
"pages": "288--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koo, Terry, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of EMNLP, pages 1,288-1,298, Cambridge, MA. Kuich, Werner and Arto Salomaa. 1986. Semirings, automata, languages. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Minimum Bayes-risk decoding for statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, Shankar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of HLT-NAACL, pages 169-176, Boston, MA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Local phrase reordering models for statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EMNLP-HLT",
"volume": "",
"issue": "",
"pages": "161--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, Shankar and William Byrne. 2005. Local phrase reordering models for statistical machine translation. In Proceedings of EMNLP-HLT, pages 161-168, Rochester, NY.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A weighted finite state transducer translation template model for statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2006,
"venue": "Natural Language Engineering",
"volume": "12",
"issue": "1",
"pages": "35--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, Shankar, Yonggang Deng, and William Byrne. 2006. A weighted finite state transducer translation template model for statistical machine translation. Natural Language Engineering, 12(1):35-75.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deterministic techniques for efficient non-deterministic parsers",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1974,
"venue": "Proceedings of ICALP",
"volume": "",
"issue": "",
"pages": "255--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lang, Bernard. 1974. Deterministic techniques for efficient non-deterministic parsers. In Proceedings of ICALP, pages 255-269, Saarbr\u00fccken.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Efficient general lattice generation and rescoring",
"authors": [
{
"first": "Andrej",
"middle": [],
"last": "Ljolje",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of Eurospeech",
"volume": "1",
"issue": "",
"pages": "251--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ljolje, Andrej, Fernando Pereira, and Michael Riley. 1999. Efficient general lattice generation and rescoring. In Proceedings of Eurospeech, pages 1,251-1,254, Budapest.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semiring frameworks and algorithms for shortest-distance problems",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Automata, Languages and Combinatorics",
"volume": "7",
"issue": "",
"pages": "321--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohri, Mehryar. 2002. Semiring frameworks and algorithms for shortest-distance problems. Journal of Automata, Languages and Combinatorics, 7:321-350.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Weighted automata algorithms",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "213--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohri, Mehryar. 2009. Weighted automata algorithms. In M. Drosde, W. Kuick, and H. Vogler, editors, Handbook of Weighted Automata. Springer, chapter 6, pages 213-254.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Probabilistic parsing as intersection",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 8th International Workshop on Parsing Technologies",
"volume": "53",
"issue": "",
"pages": "406--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nederhof, Mark-Jan and Giorgio Satta. 2003. Probabilistic parsing as intersection. In Proceedings of 8th International Workshop on Parsing Technologies, pages 137-148, Nancy. Nederhof, Mark-Jan and Giorgio Satta. 2006. Probabilistic parsing strategies. Journal of the ACM, 53(3):406-436.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz J. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160-167, Sapporo.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Algebraic systems and pushdown automata",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Petre",
"suffix": ""
},
{
"first": "Arto",
"middle": [],
"last": "Salomaa",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "257--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petre, Ion and Arto Salomaa. 2009. Algebraic systems and pushdown automata. In M. Drosde, W. Kuick, and H. Vogler, editors, Handbook of Weighted Automata. Springer, chapter 7, pages 257-289.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Real-time speech-to-speech translation for PDAs",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Krstovski",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Saleem",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Natarajan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Decerbo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Stallard",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of IEEE International Conference on Portable Information Devices",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prasad, R., K. Krstovski, F. Choi, S. Saleem, P. Natarajan, M. Decerbo, and D. Stallard. 2007. Real-time speech-to-speech translation for PDAs. In Proceedings of IEEE International Conference on Portable Information Devices, pages 1-5, Orlando, FL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Smoothed marginal distribution constraints for language modeling",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Riley",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "43--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roark, Brian, Cyril Allauzen, and Michael Riley. 2013. Smoothed marginal distribution constraints for language modeling. In Proceedings of ACL, pages 43-52, Sofia.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lexicographic semirings for exact automata encoding of sequence models",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Izhak",
"middle": [],
"last": "Shafran",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roark, Brian, Richard Sproat, and Izhak Shafran. 2011. Lexicographic semirings for exact automata encoding of sequence models. In Proceedings of ACL-HLT, pages 1-5, Portland, OR.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Exact decoding of syntactic translation models through lagrangian relaxation",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "72--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rush, Alexander M. and Michael Collins. 2011. Exact decoding of syntactic translation models through lagrangian relaxation. In Proceedings of ACL-HLT, pages 72-82, Portland, OR.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Some computational complexity results for synchronous context-free grammars",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "Enoch",
"middle": [],
"last": "Peserico",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "803--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satta, Giorgio and Enoch Peserico. 2005. Some computational complexity results for synchronous context-free grammars. In Proceedings of HLT-EMNLP, pages 803-810, Vancouver.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Efficient determinization of tagged word lattices using categorial and lexicographic semirings",
"authors": [
{
"first": "Izhak",
"middle": [],
"last": "Shafran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Mahsa",
"middle": [],
"last": "Yarmohammadi",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ASRU",
"volume": "",
"issue": "",
"pages": "283--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafran, Izhak, Richard Sproat, Mahsa Yarmohammadi, and Brian Roark. 2011. Efficient determinization of tagged word lattices using categorial and lexicographic semirings. In Proceedings of ASRU, pages 283-288, Honolulu, HI.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "165--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165-201.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Entropy-based pruning of backoff language models",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of DARPA Broadcast News Transcription and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "270--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas. 1998. Entropy-based pruning of backoff language models. In Proceedings of DARPA Broadcast News Transcription and Understanding Workshop, pages 270-274, Landsdowne, VA.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Lattice minimum Bayes-risk decoding for statistical machine translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "620--629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tromble, Roy, Shankar Kumar, Franz J. Och, and Wolfgang Macherey. 2008. Lattice minimum Bayes-risk decoding for statistical machine translation. In Proceedings of EMNLP, pages 620-629, Edinburgh.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Dekai. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377-403.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Better synchronous binarization for machine translation",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "362--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao, Tong, Mu Li, Dongdong Zhang, Jingbo Zhu, and Ming Zhou. 2009. Better synchronous binarization for machine translation. In Proceedings of EMNLP, pages 362-370, Singapore.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Improvements in dynamic programming beam search for phrase-based statistical machine translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of IWSLT",
"volume": "",
"issue": "",
"pages": "195--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zens, Richard and Hermann Ney. 2008. Improvements in dynamic programming beam search for phrase-based statistical machine translation. In Proceedings of IWSLT, pages 195-205, Honolulu, HI.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Synchronous binarization for machine translation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "256--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Hao, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of HLT-NAACL, pages 256-263, New York, NY.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Syntax augmented machine translation via chart parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NAACL Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "138--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zollmann, Andreas and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of NAACL Workshop on Statistical Machine Translation, pages 138-141, New York, NY.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Alternative representations of the regular language of possible translation candidates. Valid paths through the PDA must have balanced parentheses.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Optimized representations of the regular language of possible translation candidates.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "PDA Examples: (a) Non-regular PDA accepting {a n b n |n \u2208 N}. (b) Regular (but not bounded-stack) PDA accepting a * b * . (c) Bounded-stack PDA accepting a * b * and (d) its expansion as an FSA. (e) Weighted PDT T 1 over the tropical semiring representing the weighted transduction (a n b n , c 2n ) \u2192 3n and (f) equivalent RTN ({S}, {a, b}, {c}, {T S }, S).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "accepts a a b and a b b. accepts a 3 a3 b and a 3 6 b63 b.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "Figure 6 Conversion of an RTN R to a PDA T by the replacement operation of Section 3.2. Using the notation of Section 2.1, in this example \u03a0 = {3, 5} and \u03a0 = {3,5}, with f (3) =3 and f (5) =5. The unweighted transition (2, X 1 , 3) in R is deleted and replaced by two new transitions (2, 3, 5) and (6,3, 3); similarly, (5, X 2 , 6) is replaced by (5, 6, 7) and (8,6, 6). After application of the r \u03a3 mapping, the strings accepted by R and by T are the same.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "\u2032 , a] and d[p[e \u2032 ], q \u2032 ] are computed by the PDA shortest distance algorithm over T, and d R [n[e \u2032 ], f ] is computed by the PDA shortest distance algorithm over T R .InFigure 13, the shortest cost of paths through the transition e = (4, ( 2 , 0, 5) is found as follows: the shortest distance algorithm over T calculates d[0, 4] = 220 , d[5, 7] = 2, and B[5, ( 2 ] = {7, ) 2 , 0, 9}; the shortest distance algorithm over T R calculates d R[10, 9] = 1, 000 (trivially, here); the cost of the shortest path through e is d[0, 4] + w[e] + d[5, 7] + w[e \u2032 ] + d R[10, 9]",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF6": {
"text": "Results (lower case IBM BLEU scores over test-nw) under G small with various M \u03b8 1 as obtained with several values of \u03b8. Performance in subsequent rescoring with M 1 after likelihood-based pruning of the resulting translation lattices for various \u03b2 is also reported. In the pipeline, M 1 (and M \u03b8 1 ) are estimated with either Katz or Kneser-Ney smoothing.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF7": {
"text": "if r then \u22b2 If any of the paths considered above are below threshold31 E \u2032 \u2190 E \u2032 \u222a {((q, z), \u01eb, \u01eb, w[e], (n[e], z \u2032 ))} 32 PROCESSSTATE ((n[e], z \u2032 )) 33 s[(n[e], z \u2032 )] \u2190 (n[e], z \u2032 ) 34 d[(n[e], z \u2032 )] \u2190 min(d[(n[e], z \u2032 )], d[(q, z)] + w[e]) 35 d[(n[e], z \u2032 )] \u2190 min(d[(n[e], z \u2032 )], w F )36 elseif i[e] \u2208 \u03a0 and c \u03a0 (zi[e]) \u2208 \u03a0 * then \u22b2 If i[e] is the close parenthesis matching the top of the stack 37 z \u2032 \u2190 c \u03a0 (zi[e]) 38 if d[(q, z)] + w[e] + d[(n[e], z \u2032 )] \u2264 \u03bb then 39E \u2032 \u2190 E \u2032 \u222a {((q, z), \u01eb, \u01eb, w[e], (n[e], z \u2032 ))} 40 return (\u03a3, \u2206, \u03a0, \u03a0, Q \u2032 , E \u2032 , I \u2032 , F \u2032 , \u03c1 \u2032 ) RETAINPATH(q, z, w, q \u2032 )",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"text": "path \u03c0 is a sequence of transitions \u03c0 = e 1 . . . e n such that n[e i ] = p[e i+1 ] for 1 \u2264 i < n. We then define p[\u03c0] = p[e 1 ], n[\u03c0] = n[e n ], i[\u03c0] = i[e 1 ] \u2022 \u2022 \u2022 i[e n ], and w[\u03c0] = w[e 1 ] \u2297 . . . \u2297 w[e n ]. A path \u03c0 is accepting if p[\u03c0] = I and n[\u03c0]",
"content": "<table><tr><td>finite set of transitions,</td></tr><tr><td>and \u03c1 : F \u2192 K the final weight function. Let e = (p[e], i[e], w[e], n[e]) denote a transition</td></tr><tr><td>in E; for simplicity, (p[e], i[e], n[e]) denotes an unweighted transition (i.e., a transition</td></tr><tr><td>with weight1).</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF2": {
"text": "{\u01eb}) \u00d7 K \u00d7 Q a finite set of transitions, and \u03c1 : F \u2192 K the final weight function. Let",
"content": "<table><tr><td>e = (p[e], i[e], o[e], w[e], n[e]) denote a transition in E. Note that a PDA can be seen as</td></tr><tr><td>a particular case of a PDT where i[e] = o[e]</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "s) calculates d[s, s \u2032 ] for all s \u2032 \u2208 C s , and it also constructs sets of transitions B[s, a] = {e \u2208 E : p[e] \u2208 C s and i[e] = a} \u2200a \u2208 \u03a0",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF5": {
"text": "is still being processed, with p[e 1 ] = 2, n[e 1 ] = 5, and i[e 1 ] = ( 1 9. Transition e 2 = (7, ) 1 , 0, 8) matching ( 1 is extracted from B[n[e 1 ], i[e 1 ]], with p[e 2 ] =",
"content": "<table><tr><td>7</td></tr><tr><td>and n[e 2 ] = 8 10. Distance d[0, 8] is calculated as d[0, n[e 2 ]] : d[0, n[e 2 ]] \u2190 d[0, p[e 1 ]] + w[p[e 1 ], n[e 1 ]] + d[n[e 1 ], p[e 2 ]] + w[p[e 2 ], n[e 2 ]] 10. Processing of e 1 finishes, and calculation of distances from 0 continues: d[0, 10] \u2190 d[0, 8] + w[8, 10]</td></tr><tr><td>10 is a final state. Processing continues with transition (0, t 1 , 20, 3) d[0, 3] \u2190 d[0, 0] + w[0, 3]; d[0, 4] \u2190 d[0, 3] + w[3, 4]</td></tr><tr><td>13. Transition e 3 = (4, ( 2 , 0, 5) is reached e 3 has symbol i[e 3 ] = ( 2 , source state p[e 3 ] = 4, and destination state n[e 3 ] = 5 14. GETDISTANCE(T, 5) is not called, since d[5, 5] = 0 indicates state 5 has been previously</td></tr><tr><td>visited</td></tr><tr><td>15. Transition e 4 = (7, ) 2 , 0, 9) matching ( 2 is extracted from B[n[e 3 ], i[e 3 ]], with p[e 4 ] = 7 and n[e 4 ] = 9 16. Distance d[0, 9] is calculated as d[0, n[e 4 ]], using cached values: d[0, n[e 4 ]] \u2190 d[0, p[e 3 ]] + w[p[e 3 ], n[e 3 ]] + d[n[e 3 ], p[e 4 ]] + w[p[e 4 ], n[e 4 ]] 17. d[0, 10] is less than \u221e :</td></tr><tr><td>d[0, 10] \u2190 min(d[0, 10], d[0, 9] + w[9, 10])</td></tr><tr><td>18. GETDISTANCE(T, 0) ends and returns d[0, 10]</td></tr><tr><td>GETDISTANCE(T) ends</td></tr><tr><td>Figure 10</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"text": "is |Q| 2 . Second, the space required for storing B is at most in O(|E| 2 ) because for each open parenthesis transition e, the size of |B[n[e], i[e]]|",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF7": {
"text": "is non-infinity. Moreover, for each open parenthesis transition e, there exists a unique close parenthesis transition e \u2032 such that e \u2032 \u2208 B[n[e], i[e]]. When each component of the RTN is acyclic, the complexity of the algorithm is O(|T|) in time and space.",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF8": {
"text": "Translation complexity of target language representations for translation grammars of rank 2.",
"content": "<table><tr><td>Representation</td><td colspan=\"2\">Time Complexity Space Complexity</td></tr><tr><td colspan=\"2\">CFG/hypergraph O(|s| 3 |G| |M| 3 )</td><td>O(|s| 3 |G| |M| 3 )</td></tr><tr><td>PDA</td><td>O(|s| 3 |G| |M| 3 )</td><td>O(|s| 3 |G| |M| 2 )</td></tr><tr><td>FSA</td><td>O(e |s| 3 |G| |M|)</td><td>O(e |s| 3 |G| |M|)</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF10": {
"text": "Success in finding the 1-best translation under G with various M \u03b8 1 under a memory size limit of 10GB as measured over tune-nw (1,755 sentences). We note which operations in translation exceeded the memory limit: either Expansion and Intersection for HiFST, or Intersection and Shortest Path operation for HiPDT.",
"content": "<table><tr><td/><td/><td colspan=\"5\">Decoding with G + M \u03b8 1 under a 10GB memory size limit</td></tr><tr><td>#</td><td>\u03b8</td><td/><td>HiFST</td><td/><td/><td>HiPDT</td></tr><tr><td/><td/><td>Success</td><td>Failure</td><td/><td>Success</td><td>Failure</td></tr><tr><td/><td/><td/><td colspan=\"2\">Expansion Intersection</td><td/><td colspan=\"2\">Intersection Shortest Path</td></tr><tr><td colspan=\"2\">2 7.5 \u00d7 10 \u22129 3 7.5 \u00d7 10 \u22128 4 7.5 \u00d7 10 \u22127</td><td>12% 16% 18%</td><td>51% 53% 53%</td><td>37% 31% 29%</td><td>40% 76% 99.8%</td><td>8% 1% 0%</td><td>52% 23% 0.2%</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF12": {
"text": "What are the speed and quality tradeoffs for HiPDT as a function of first-pass LM size and translation grammar complexity?",
"content": "<table><tr><td>5.3</td><td/><td/></tr><tr><td>\u03b2</td><td colspan=\"2\">Kneser-Ney Katz</td></tr><tr><td>8</td><td>814</td><td>619</td></tr><tr><td>12</td><td>343</td><td>212</td></tr><tr><td>15</td><td>240</td><td>110</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF13": {
"text": "= z D then \u22b2 If the stack has changed, D needs to be cleared and recomputed \u2208 B[q, z |z| ] do \u22b2 For each close paren. transition balancing the incoming z |z| -labeled open paren. transition in q \u2032 \u2208 B[n[e], i[e]] do \u22b2 For each close paren. transition e \u2032 that balances e 27 w \u2190 w[e] + d R [n[e], p[e \u2032 ]] + w[e \u2032 ] \u22b2 w: weight of the shortest bal. path beginning by e and ending by e \u2032 in T 28 r \u2190 r \u2228 RETAINPATH (q, z, w, n[e \u2032 ]) \u22b2 Does the expansion of that path belong to an accepting path below threshold? 29 w F \u2190 min(w F , d R [n[e], p[e \u2032 ]] + w[e \u2032 ] + d[(n[e \u2032 ], z)])",
"content": "<table><tr><td colspan=\"2\">10 while S =\u2205 do</td></tr><tr><td>11</td><td>(q, z) \u2190 HEAD (S)</td></tr><tr><td>12</td><td>DEQUEUE (S)</td></tr><tr><td>13</td><td>if s[(q, z)] = (q, z) then</td></tr><tr><td colspan=\"2\">14 if z 15 CLEAR (D)</td></tr><tr><td colspan=\"2\">16 17 for each e 18 z D \u2190 z D[p[e]] \u2190 min(D[p[e]], w[e] + d[(n[e], z 1 \u2022 \u2022 \u2022 z |z|\u22121 )]) 19 for each e \u2208 E[q] do</td></tr><tr><td>20</td><td>if i[e] \u2208 \u03a3 \u222a {\u01eb} then \u22b2 If i[e] is a regular symbol</td></tr><tr><td>21 22</td><td>if RETAINPATH (q, z, w[e], n[e]) then E \u2032 \u2190 E \u2032 \u222a {((q, z), i[e], o[e], w[e], (n[e], z))}</td></tr><tr><td>23 24</td><td>elseif i[e] \u2208 \u03a0 then \u22b2 If i[e] is an open parenthesis z \u2032 \u2190 zi[e]</td></tr><tr><td>25</td><td>r \u2190 false</td></tr><tr><td>26</td><td>for each e</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF14": {
"text": "to (q \u2032 , z) of weight w 3 w F \u2190 min{d R [q \u2032 , t] + D[t]|D[t] = \u221e} \u22b2 Current estimate of s. d. from (q \u2032 , z) to f \u2032 4 if w I < d[(q \u2032 , z)] then \u22b2 If w I is a better estimate of s.-d. from I \u2032 to (q \u2032 , z), update d[(q \u2032 , z)] and s[(q \u2032 , z)] < d[(q \u2032 , z)] then \u22b2 If w F is a better estimate of s. d. from (q \u2032 , z) to f \u2032 , update d[(q \u2032 , z)]",
"content": "<table><tr><td>5 6</td><td>d[(q \u2032 , z)] \u2190 w I s[(q \u2032 , z)] \u2190 s[(q, z)]</td></tr><tr><td colspan=\"2\">7 if w F 8 d[(q \u2032</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}