{
"paper_id": "D10-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:53:02.189223Z"
},
"title": "Efficient Incremental Decoding for Tree-to-String Translation",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA",
"country": "USA"
}
},
"email": "lhuang@isi.edu"
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey",
"postCode": "90292",
"region": "CA",
"country": "USA"
}
},
"email": "haitaomi@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Syntax-based translation models should in principle be efficient with polynomially-sized search space, but in practice they are often embarassingly slow, partly due to the cost of language model integration. In this paper we borrow from phrase-based decoding the idea to generate a translation incrementally left-to-right, and show that for tree-to-string models, with a clever encoding of derivation history, this method runs in averagecase polynomial-time in theory, and lineartime with beam search in practice (whereas phrase-based decoding is exponential-time in theory and quadratic-time in practice). Experiments show that, with comparable translation quality, our tree-to-string system (in Python) can run more than 30 times faster than the phrase-based system Moses (in C++).",
"pdf_parse": {
"paper_id": "D10-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Syntax-based translation models should in principle be efficient with polynomially-sized search space, but in practice they are often embarassingly slow, partly due to the cost of language model integration. In this paper we borrow from phrase-based decoding the idea to generate a translation incrementally left-to-right, and show that for tree-to-string models, with a clever encoding of derivation history, this method runs in averagecase polynomial-time in theory, and lineartime with beam search in practice (whereas phrase-based decoding is exponential-time in theory and quadratic-time in practice). Experiments show that, with comparable translation quality, our tree-to-string system (in Python) can run more than 30 times faster than the phrase-based system Moses (in C++).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most efforts in statistical machine translation so far are variants of either phrase-based or syntax-based models. From a theoretical point of view, phrasebased models are neither expressive nor efficient: they typically allow arbitrary permutations and resort to language models to decide the best order. In theory, this process can be reduced to the Traveling Salesman Problem and thus requires an exponentialtime algorithm (Knight, 1999) . In practice, the decoder has to employ beam search to make it tractable (Koehn, 2004) . However, even beam search runs in quadratic-time in general (see Sec. 2), unless a small distortion limit (say, d=5) further restricts the possible set of reorderings to those local ones by ruling out any long-distance reorderings that have a \"jump\" in theory in practice phrase-based exponential quadratic tree-to-string polynomial linear Table 1 : [main result] Time complexity of our incremental tree-to-string decoding compared with phrase-based. In practice means \"approximate search with beams.\" longer than d. This has been the standard practice with phrase-based models (Koehn et al., 2007) , which fails to capture important long-distance reorderings like SVO-to-SOV. Syntax-based models, on the other hand, use syntactic information to restrict reorderings to a computationally-tractable and linguisticallymotivated subset, for example those generated by synchronous context-free grammars (Wu, 1997; Chiang, 2007) . In theory the advantage seems quite obvious: we can now express global reorderings (like SVO-to-VSO) in polynomial-time (as opposed to exponential in phrase-based). But unfortunately, this polynomial complexity is super-linear (being generally cubic-time or worse), which is slow in practice. 
Furthermore, language model integration becomes more expensive here since the decoder now has to maintain target-language boundary words at both ends of a subtranslation (Huang and Chiang, 2007) , whereas a phrase-based decoder only needs to do this at one end since the translation is always growing left-to-right. As a result, syntax-based models are often embarassingly slower than their phrase-based counterparts, preventing them from becoming widely useful.",
"cite_spans": [
{
"start": 426,
"end": 440,
"text": "(Knight, 1999)",
"ref_id": "BIBREF8"
},
{
"start": 515,
"end": 528,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 1109,
"end": 1129,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF9"
},
{
"start": 1430,
"end": 1440,
"text": "(Wu, 1997;",
"ref_id": "BIBREF19"
},
{
"start": 1441,
"end": 1454,
"text": "Chiang, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 1920,
"end": 1944,
"text": "(Huang and Chiang, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 871,
"end": 878,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Can we combine the merits of both approaches? While other authors have explored the possibilities of enhancing phrase-based decoding with syntaxaware reordering (Galley and Manning, 2008) , we are more interested in the other direction, i.e., can syntax-based models learn from phrase-based decoding, so that they still model global reordering, but in an efficient (preferably linear-time) fashion? Watanabe et al. (2006) is an early attempt in this direction: they design a phrase-based-style decoder for the hierarchical phrase-based model (Chiang, 2007) . However, this algorithm even with the beam search still runs in quadratic-time in practice. Furthermore, their approach requires grammar transformation that converts the original grammar into an equivalent binary-branching Greibach Normal Form, which is not always feasible in practice.",
"cite_spans": [
{
"start": 161,
"end": 187,
"text": "(Galley and Manning, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 399,
"end": 421,
"text": "Watanabe et al. (2006)",
"ref_id": "BIBREF18"
},
{
"start": 542,
"end": 556,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We take a fresh look on this problem and turn our focus to one particular syntax-based paradigm, treeto-string translation (Liu et al., 2006; Huang et al., 2006) , since this is the simplest and fastest among syntax-based approaches. We develop an incremental dynamic programming algorithm and make the following contributions:",
"cite_spans": [
{
"start": 123,
"end": 141,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 142,
"end": 161,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we show that, unlike previous work, our incremental decoding algorithm runs in averagecase polynomial-time in theory for tree-tostring models, and the beam search version runs in linear-time in practice (see Table 1 );",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 large-scale experiments on a tree-to-string system confirm that, with comparable translation quality, our incremental decoder (in Python) can run more than 30 times faster than the phrase-based system Moses (in C++) (Koehn et al., 2007) ;",
"cite_spans": [
{
"start": 218,
"end": 238,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 furthermore, on the same tree-to-string system, incremental decoding is slightly faster than the standard cube pruning method at the same level of translation quality;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 this is also the first linear-time incremental decoder that performs global reordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will first briefly review phrase-based decoding in this section, which inspires our incremental algorithm in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will use the following running example from Chinese to English to explain both phrase-based and syntax-based decoding throughout this paper:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Phrase-based Decoding",
"sec_num": "2"
},
{
"text": "0 B\u00f9sh\u00ed 1 Bush y\u01d4 2 with Sh\u0101l\u00f3ng 3 Sharon j\u01d4x\u00edng 4 hold le -ed 5 hu\u00ect\u00e1n 6 meeting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Phrase-based Decoding",
"sec_num": "2"
},
{
"text": "'Bush held talks with Sharon'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Phrase-based Decoding",
"sec_num": "2"
},
{
"text": "Phrase-based decoders generate partial targetlanguage outputs in left-to-right order in the form of hypotheses (Koehn, 2004) . Each hypothesis has a coverage vector capturing the source-language words translated so far, and can be extended into a longer hypothesis by a phrase-pair translating an uncovered segment. This process can be formalized as a deductive system. For example, the following deduction step grows a hypothesis by the phrase-pair y\u01d4 Sh\u0101l\u00f3ng, with Sharon covering Chinese span [1-3]:",
"cite_spans": [
{
"start": 111,
"end": 124,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "(\u2022 \u2022\u2022\u2022 6 ) : (w, \"Bush held talks\") (\u2022\u2022\u2022 3 \u2022\u2022\u2022) : (w \u2032 , \"Bush held talks with Sharon\") (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "where a \u2022 in the coverage vector indicates the source word at this position is \"covered\" and where w and w \u2032 = w+c+d are the weights of the two hypotheses, respectively, with c being the cost of the phrase-pair, and d being the distortion cost. To compute d we also need to maintain the ending position of the last phrase (the 3 and 6 in the coverage vector).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "To add a bigram model, we split each \u2212LM item above into a series of +LM items; each +LM item has the form (v, a ) where a is the last word of the hypothesis. Thus a +LM version of (1) might be:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "(\u2022 \u2022\u2022\u2022 6 , talks ) : (w, \"Bush held talks\") (\u2022\u2022\u2022 3 \u2022\u2022\u2022, Sharon ) : (w \u2032 , \"Bush held talks with Sharon\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "where the score of the resulting +LM item",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "w \u2032 = w + c + d \u2212 log P lm (with | talk)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "now includes a combination cost due to the bigrams formed when applying the phrase-pair. The complexity of this dynamic programming algorithm for g-gram decoding is O(2 n n 2 |V | g\u22121 ) where n is the sentence length and |V | is the English vocabulary size (Huang and Chiang, 2007) . ",
"cite_spans": [
{
"start": 257,
"end": 281,
"text": "(Huang and Chiang, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Dynamic Programming Algorithm",
"sec_num": "2.1"
},
{
"text": "To make the exponential algorithm practical, beam search is the standard approximate search method (Koehn, 2004) . Here we group +LM items into n bins, with each bin B i hosting at most b items that cover exactly i Chinese words (see Figure 1 ). The complexity becomes O(n 2 b) because there are a total of O(nb) items in all bins, and to expand each item we need to scan the whole coverage vector, which costs O(n). This quadratic complexity is still too slow in practice and we often set a small distortion limit of d max (say, 5) so that no jumps longer than d max are allowed. This method reduces the complexity to O(nbd max ) but fails to capture longdistance reorderings (Galley and Manning, 2008) .",
"cite_spans": [
{
"start": 99,
"end": 112,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 677,
"end": 703,
"text": "(Galley and Manning, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Beam Search in Practice",
"sec_num": "2.2"
},
{
"text": "We will first briefly review tree-to-string translation paradigm and then develop an incremental decoding algorithm for it inspired by phrase-based decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding for Tree-to-String Translation",
"sec_num": "3"
},
{
"text": "A typical tree-to-string system (Liu et al., 2006; Huang et al., 2006) performs translation in two steps: parsing and decoding. A parser first parses the source language input into a 1-best tree T , and the decoder then searches for the best derivation (a se- quence of translation steps) d * that converts source tree T into a target-language string. Figure 3 shows how this process works. The Chinese sentence (a) is first parsed into tree (b), which will be converted into an English string in 5 steps. First, at the root node, we apply rule r 1 preserving the top-level word-order",
"cite_spans": [
{
"start": 32,
"end": 50,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF11"
},
{
"start": 51,
"end": 70,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "(a) B\u00f9sh\u00ed [y\u01d4 Sh\u0101l\u00f3ng ] 1 [j\u01d4x\u00edng le hu\u00ect\u00e1n ] 2 \u21d3 1-best parser (b) IP @\u01eb NP @1 B\u00f9sh\u00ed VP @2 PP @2.1 P y\u01d4 NP @2.1.2 Sh\u0101l\u00f3ng VP @2.2 VV j\u01d4x\u00edng AS le NP @2.2.3 hu\u00ect\u00e1n r 1 \u21d3 (c) NP @1 B\u00f9sh\u00ed VP @2 PP @2.1 P y\u01d4 NP @2.1.2 Sh\u0101l\u00f3ng VP @2.2 VV j\u01d4x\u00edng AS le NP @2.2.3 hu\u00ect\u00e1n r 2 \u21d3 r 3 \u21d3 (d) Bush held NP @2.2.3 hu\u00ect\u00e1n with NP @2.1.2 Sh\u0101l\u00f3ng r 4 \u21d3 r 5 \u21d3 (e) Bush [held talks] 2 [with Sharon] 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "(r 1 ) IP (x 1 :NP x 2 :VP) \u2192 x 1 x 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "which results in two unfinished subtrees, NP @1 and VP @2 in (c). Here X @\u03b7 denotes a tree node of label X at tree address \u03b7 (Shieber et al., 1995) . (The root node has address \u01eb, and the first child of node \u03b7 has address \u03b7.1, etc.) Then rule r 2 grabs the B\u00f9sh\u00ed subtree and transliterate it into the English word in theory in practice phrase* \"Bush\". Similarly, rule r 3 shown in Figure 2 is applied to the VP subtree, which swaps the two NPs, yielding the situation in (d). Finally two phrasal rules r 4 and r 5 translate the two remaining NPs and finish the translation. In this framework, decoding without language model (\u2212LM decoding) is simply a linear-time depth-first search with memoization (Huang et al., 2006) , since a tree of n words is also of size O(n) and we visit every node only once. Adding a language model, however, slows it down significantly because we now have to keep track of targetlanguage boundary words, but unlike the phrasebased case in Section 2, here we have to remember both sides the leftmost and the rightmost boundary words: each node is now split into +LM items like (\u03b7 a \u22c6 b ) where \u03b7 is a tree node, and a and b are left and right English boundary words. For example, a bigram +LM item for node VP @2 might be",
"cite_spans": [
{
"start": 125,
"end": 147,
"text": "(Shieber et al., 1995)",
"ref_id": "BIBREF15"
},
{
"start": 700,
"end": 720,
"text": "(Huang et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 381,
"end": 389,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "O(2 n n 2 \u2022 |V | g\u22121 ) O(n 2 b) tree-to-str O(nc \u2022 |V | 4(g\u22121) ) O(ncb 2 ) this work* O(n k log 2 (cr) \u2022 |V | g\u22121 ) O(ncb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "(VP @2 held \u22c6 Sharon ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "This is also the case with other syntax-based models like Hiero or GHKM: language model integration overhead is the most significant factor that causes syntax-based decoding to be slow (Chiang, 2007) . In theory +LM decoding is O(nc|V | 4(g\u22121) ), where V denotes English vocabulary (Huang, 2007) . In practice we have to resort to beam search again: at each node we would only allow top-b +LM items. With beam search, tree-to-string decoding with an integrated language model runs in time O(ncb 2 ), where b is the size of the beam at each node, and c is (maximum) number of translation rules matched at each node (Huang, 2007) . See Table 2 for a summary.",
"cite_spans": [
{
"start": 185,
"end": 199,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 282,
"end": 295,
"text": "(Huang, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 614,
"end": 627,
"text": "(Huang, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 634,
"end": 641,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Tree-to-string Translation",
"sec_num": "3.1"
},
{
"text": "Can we borrow the idea of phrase-based decoding, so that we also grow the hypothesis strictly leftto-right, and only need to maintain the rightmost boundary words?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "The key intuition is to adapt the coverage-vector idea from phrase-based decoding to tree-to-string decoding. Basically, a coverage-vector keeps track of which Chinese spans have already been translated and which have not. Similarly, here we might need a \"tree coverage-vector\" that indicates which subtrees have already been translated and which have not. But unlike in phrase-based decoding, we can not simply choose any arbitrary uncovered subtree for the next step, since rules already dictate which subtree to visit next. In other words what we need here is not really a tree coverage vector, but more of a derivation history.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "We develop this intuition into an agenda represented as a stack. Since tree-to-string decoding is a top-down depth-first search, we can simulate this recursion with a stack of active rules, i.e., rules that are not completed yet. For example we can simulate the derivation in Figure 3 as follows. At the root node IP @\u01eb , we choose rule r 1 , and push its English-side to the stack, with variables replaced by matched tree nodes, here x 1 for NP @1 and x 2 for VP @2 . So we have the following stack",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 284,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "s = [ NP @1 VP @2 ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "where the dot indicates the next symbol to process in the English word-order. Since node NP @1 is the first in the English word-order, we expand it first, and push rule r 2 rooted at NP to the stack:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "[ NP @1 VP @2 ] [ Bush].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "Since the symbol right after the dot in the top rule is a word, we immediately grab it, and append it to the current hypothesis, which results in the new stack",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "[ NP @1 VP @2 ] [Bush ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "Now the top rule on the stack has finished (dot is at the end), so we trigger a \"pop\" operation which pops the top rule and advances the dot in the second-totop rule, denoting that NP @1 is now completed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "[NP @1 VP @2 ]. stack hypothesis [<s> IP @\u01eb </s>] <s> p [<s> IP @\u01eb </s>] [ NP @1 VP @2 ] <s> p [<s> IP @\u01eb </s>] [ NP @1 VP @2 ] [ Bush] <s> s [<s> IP @\u01eb </s>] [ NP @1 VP @2 ] [Bush ] <s> Bush c [<s> IP @\u01eb </s>] [NP @1 VP @2 ] <s> Bush p [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [ held NP @2.2.3 with NP @2.1.2 ] <s> Bush s [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ] <s> Bush held p [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ] [ talks] <s> Bush held s [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ] [talks ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "<s> Bush held talks c [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2. Figure 4 : Simulation of tree-to-string derivation in Figure 3 in the incremental decoding algorithm. Actions: p, predict; s, scan; c, complete (see Figure 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 4",
"ref_id": null
},
{
"start": 136,
"end": 144,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 231,
"end": 239,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "Item \u2113 : s, \u03c1 : w; \u2113: step, s: stack, \u03c1: hypothesis, w: weight The next step is to expand VP @2 , and we use rule r 3 and push its English-side \"VP \u2192 held x 2 with x 1 \" onto the stack, again with variables replaced by matched nodes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "Equivalence \u2113 : s, \u03c1 \u223c \u2113 : s \u2032 , \u03c1 \u2032 iff. s = s \u2032 and last g\u22121 (\u03c1) = last g\u22121 (\u03c1 \u2032 ) Axiom 0 : [<s> g\u22121 \u01eb </s>], <s> g\u22121 : 0 Predict \u2113 : ... [\u03b1 \u03b7 \u03b2], \u03c1 : w \u2113 + |C(r)| : ... [\u03b1 \u03b7 \u03b2] [ f (\u03b7, E(r))], \u03c1 : w + c(r) match(\u03b7, C(r)) Scan \u2113 : ... [\u03b1 e \u03b2], \u03c1 : w \u2113 : ... [\u03b1 e \u03b2], \u03c1e : w \u2212 log Pr(e | last g\u22121 (\u03c1)) Complete \u2113 : ... [\u03b1 \u03b7 \u03b2] [\u03b3 ], \u03c1 : w \u2113 : ... [\u03b1 \u03b7 \u03b2], \u03c1 : w Goal |T | : [<s> g\u22121 \u01eb </s> ], \u03c1</s> : w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "[NP @1 VP @2 ] [ held NP @2.2.3 with NP @2.1.2 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "Note that this is a reordering rule, and the stack always follows the English word order because we generate hypothesis incrementally left-to-right. Figure 4 works out the full example. We formalize this algorithm in Figure 5 . Each item s, \u03c1 consists of a stack s and a hypothesis \u03c1. Similar to phrase-based dynamic programming, only the last g\u22121 words of \u03c1 are part of the signature for decoding with g-gram LM. Each stack is a list of dotted rules, i.e., rules with dot positions indicting progress, in the style of Earley (1970) . We call the last (rightmost) rule on the stack the top rule, which is the rule being processed currently. The symbol after the dot in the top rule is called the next symbol, since it is the symbol to expand or process next. Depending on the next symbol a, we can perform one of the three actions:",
"cite_spans": [
{
"start": 519,
"end": 532,
"text": "Earley (1970)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Figure 4",
"ref_id": null
},
{
"start": 217,
"end": 225,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "\u2022 if a is a node \u03b7, we perform a Predict action which expands \u03b7 using a rule r that can patternmatch the subtree rooted at \u03b7; we push r is to the stack, with the dot at the beginning;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "\u2022 if a is an English word, we perform a Scan action which immediately adds it to the current hypothesis, advancing the dot by one position;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "\u2022 if the dot is at the end of the top rule, we perform a Complete action which simply pops stack and advance the dot in the new top rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental Decoding",
"sec_num": "3.2"
},
{
"text": "Unlike phrase-based models, we show here that incremental decoding runs in average-case polynomial-time for tree-to-string systems. Proof. The time complexity depends (in part) on the number of all possible stacks for a tree of depth d. A stack is a list of rules covering a path from the root node to one of the leaf nodes in the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "R 1 [... \u03b7 1 ...] R 2 [... \u03b7 2 ...] ... Rs [... \u03b7 s ...],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "where \u03b7 1 = \u01eb is the root node and \u03b7 s is a leaf node, with stack depth s \u2264 d. Each rule R i (i > 1) expands node \u03b7 i\u22121 , and thus has c choices by the definition of grammar constant c. Furthermore, each rule in the stack is actually a dotted-rule, i.e., it is associated with a dot position ranging from 0 to r, where r is the arity of the rule (length of English side of the rule). So the total number of stacks is O((cr) d ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "Besides the stack, each state also maintains (g\u22121) rightmost words of the hypothesis as the language model signature, which amounts to O(|V | g\u22121 ). So the total number of states is O((cr) d |V | g\u22121 ). Following previous work (Chiang, 2007) , we assume a constant number of English translations for each foreign word in the input sentence, so |V | = O(n). And as mentioned above, for each state, there are c possible expansions, so the overall time complexity is f (n, d) = c(cr)",
"cite_spans": [
{
"start": 227,
"end": 241,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "d |V | g\u22121 = O((cr) d n g\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "We do average-case analysis below because the tree depth (height) for a sentence of n words is a random variable: in the worst-case it can be linear in n (degenerated into a linear-chain), but we assume this adversarial situation does not happen frequently, and the average tree depth is O(log n). Theorem 1. Assume for each n, the depth of a parse tree of n words, notated d n , distributes normally with logarithmic mean and variance, i.e., d n \u223c N (\u00b5 n , \u03c3 2 n ), where \u00b5 n = O(log n) and \u03c3 2 n = O(log n), then the average-case complexity of the algorithm is h(n) = O(n k log 2 (cr)+g\u22121 ) for constant k, thus polynomial in n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "Proof. From Lemma 1 and the definition of averagecase complexity, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "h(n) = E dn\u223cN (\u00b5n,\u03c3 2 n ) [f (n, d n )],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "E x\u223cD [\u2022]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "denotes the expectation with respect to the random variable x in distribution D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "h(n) = E dn\u223cN (\u00b5n,\u03c3 2 n ) [f (n, d n )] = E dn\u223cN (\u00b5n,\u03c3 2 n ) [O((cr) dn n g\u22121 )], = O(n g\u22121 E dn\u223cN (\u00b5n,\u03c3 2 n ) [(cr) dn ]), = O(n g\u22121 E dn\u223cN (\u00b5n,\u03c3 2 n ) [exp(d n log(cr))]) (2) Since d n \u223c N (\u00b5 n , \u03c3 2 n ) is a normal distribution, d n log(cr) \u223c N (\u00b5 \u2032 , \u03c3 \u20322 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "is also a normal distribution, where \u00b5 \u2032 = \u00b5 n log(cr) and \u03c3 \u2032 = \u03c3 n log(cr). Therefore exp(d n log(cr)) is a log-normal distribution, and by the property of log-normal distribution, its expectation is exp (\u00b5 \u2032 + \u03c3 \u20322 /2). So we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "E dn\u223cN (\u00b5n,\u03c3 2 /2) [exp(d n log(cr))] = exp (\u00b5 \u2032 + \u03c3 \u20322 /2) = exp (\u00b5 n log(cr) + \u03c3 2 n log 2 (cr)/2) = exp (O(log n) log(cr) + O(log n) log 2 (cr)/2) = exp (O(log n) log 2 (cr)) \u2264 exp (k(log n) log 2 (cr)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "for some constant k = exp (log n k log 2 (cr) ) = n k log 2 (cr) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "Plugging this back into Equation 2, we obtain the average-case complexity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E_{d_n}[f(n, d_n)] \u2264 O(n^{g\u22121} \u00b7 n^{k log^2(cr)}) = O(n^{k log^2(cr)+g\u22121}).",
"eq_num": "(4)"
}
],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "Since k, c, r and g are constants, the average-case complexity is polynomial in sentence length n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
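To make the bound concrete, one can plug in illustrative constants (these values are hypothetical, not measured from the system) and compute the polynomial degree k log^2(cr) + g \u2212 1:

```python
import math

def polynomial_degree(k, c, r, g):
    """Degree of the average-case bound O(n^(k*log^2(cr) + g - 1))."""
    return k * math.log(c * r) ** 2 + g - 1

# Illustrative values: c = 2 matching rules per node, r = 2 variables
# per rule, a trigram LM (g = 3), and k = 1 from the O(log n) depth
# assumption -- all hypothetical, chosen only to exercise the formula.
deg = polynomial_degree(k=1, c=2, r=2, g=3)
assert 3.9 < deg < 4.0  # log(4)^2 + 2, i.e. polynomial but high degree
```

Even with such small constants the degree is close to 4, which is why the next subsection resorts to beam search.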
{
"text": "The assumption d n \u223c N (O(log n), O(log n)) will be empirically verified in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polynomial Time Complexity",
"sec_num": "3.3"
},
{
"text": "Though polynomial complexity is a desirable property in theory, the degree of the polynomial, k log^2(cr) + g \u2212 1, might still be too high in practice, depending on the translation grammar. To make decoding linear-time, we again apply the beam-search idea from phrase-based decoding. Once again, the only question is the choice of \"binning\": how do we assign each item to a particular bin, according to its progress?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear-time Beam Search",
"sec_num": "3.4"
},
{
"text": "While the number of Chinese words covered is a natural progress indicator for phrase-based, it does not work for tree-to-string because, among the three actions, only scanning grows the hypothesis. The prediction and completion actions do not make real progress in terms of words, though they do make progress on the tree. So we devise a novel progress indicator natural for tree-to-string translation: the number of tree nodes covered so far. Initially that number is zero, and in a prediction step which expands node \u03b7 using rule r, the number increments by |C(r)|, the size of the Chinese-side treelet of r. For example, a prediction step using rule r 3 in Figure 2 to expand VP @2 will increase the tree-node count by |C(r 3 )| = 6, since there are six tree nodes in that rule (not counting leaf nodes or variables).",
"cite_spans": [],
"ref_spans": [
{
"start": 660,
"end": 668,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Linear-time Beam Search",
"sec_num": "3.4"
},
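The tree-node progress indicator can be sketched as follows. The TreeNode class and the shape of rule r3 are illustrative stand-ins (the nested VP with an aspect-particle node is a plausible reconstruction of the paper's example, not its actual data structure):

```python
# Hedged sketch of the tree-node progress indicator: a prediction step
# advances an item's bin index by |C(r)|, the number of tree nodes in
# the rule's Chinese-side treelet, excluding leaf terminals and variables.

class TreeNode:
    def __init__(self, label, children=(), is_variable=False):
        self.label = label
        self.children = list(children)
        self.is_variable = is_variable  # x_i variables do not count

def treelet_size(node):
    """|C(r)|: internal tree nodes of the Chinese side of a rule."""
    if node.is_variable or not node.children:
        return 0  # variables and leaf (terminal) nodes are excluded
    return 1 + sum(treelet_size(c) for c in node.children)

# A plausible rendering of rule r3 (illustrative): six internal nodes.
r3_chinese = TreeNode("VP", [
    TreeNode("PP", [TreeNode("P", [TreeNode("yu")]),
                    TreeNode("x1:NP", is_variable=True)]),
    TreeNode("VP", [TreeNode("VV", [TreeNode("juxing")]),
                    TreeNode("AS", [TreeNode("le")]),
                    TreeNode("x2:NP", is_variable=True)]),
])
assert treelet_size(r3_chinese) == 6  # prediction advances the bin by 6
```

A prediction with this rule would move an item from bin i to bin i + 6.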
{
"text": "Scanning and completion do not make progress in this definition since there is no new tree node covered. In fact, since both of them are deterministic operations, they are treated as \"closure\" operators in the real implementation, which means that after a prediction, we always do as many scanning/completion steps as possible until the symbol after the dot is another node, where we have to wait for the next prediction step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear-time Beam Search",
"sec_num": "3.4"
},
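The scan/complete closure can be sketched with a deliberately simplified item representation (a stack of (rhs, dot) frames; the tagging convention for tree nodes is an assumption for illustration, not the paper's encoding):

```python
# Hedged sketch of the "closure" operator: after a prediction, we apply
# deterministic scan (emit an English word) and complete (pop a finished
# rule frame) steps until the symbol after the dot is a tree node that
# must wait for its own prediction step.

def is_tree_node(sym):
    # Hypothetical convention: tree nodes are tagged, e.g. ("node", "VP@2").
    return isinstance(sym, tuple) and sym[0] == "node"

def closure(stack, output):
    """Apply scan/complete until blocked on a tree node (or done)."""
    while stack:
        rhs, dot = stack[-1]
        if dot == len(rhs):            # complete: rule fully consumed
            stack.pop()
            if stack:                  # advance the dot in the parent frame
                prhs, pdot = stack.pop()
                stack.append((prhs, pdot + 1))
            continue
        sym = rhs[dot]
        if is_tree_node(sym):          # must wait for a prediction step
            break
        output.append(sym)             # scan: emit an English word
        stack[-1] = (rhs, dot + 1)
    return stack, output

stack = [(["held", ("node", "NP@2.2.3"), "with", ("node", "NP@2.1.2")], 0)]
stack, out = closure(stack, [])
assert out == ["held"] and stack[-1][1] == 1  # blocked before NP@2.2.3
```

After the closure, the decoder only ever branches at prediction steps, which is what makes tree-node counting a well-defined progress measure.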
{
"text": "This method has |T | = O(n) bins where |T | is the size of the parse tree, and each bin holds b items. Each item can expand to c new items, so the overall complexity of this beam search is O(ncb), which is linear in sentence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear-time Beam Search",
"sec_num": "3.4"
},
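The binned beam search above can be sketched as follows; expand() is a stand-in for a prediction step followed by the scan/complete closure, and all names are illustrative:

```python
# Hedged sketch of the linear-time beam search with tree-node binning.
# expand(item, score, i) is assumed to return (j, new_item, new_score)
# triples with j = i + |C(r)| for each applicable rule r.

def beam_search(tree_size, initial_item, expand, b=50):
    bins = [[] for _ in range(tree_size + 1)]   # one bin per node count
    bins[0].append((0.0, initial_item))
    for i in range(tree_size):                  # |T| = O(n) bins
        bins[i].sort(key=lambda x: -x[0])
        for score, item in bins[i][:b]:         # keep the b best per bin
            for j, new_item, new_score in expand(item, score, i):
                bins[j].append((new_score, new_item))
    # finished items have covered all tree nodes, i.e. sit in the last bin
    return max(bins[tree_size], default=None)

# Toy grammar: each "prediction" covers one more node and appends a word.
def toy_expand(item, score, i):
    return [(i + 1, item + ["word"], score + 1.0)] if i < 2 else []

best = beam_search(2, [], toy_expand)
assert best == (2.0, ["word", "word"])
```

With |T| bins, b items per bin, and at most c expansions per item, the total work is O(|T| b c) = O(ncb), matching the analysis above.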
{
"text": "The work of Watanabe et al. (2006) is closest in spirit to ours: they also design an incremental decoding algorithm, but for the hierarchical phrase-based system (Chiang, 2007) instead. While we leave detailed comparison and theoretical analysis to future work, here we point out some obvious differences:",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "Watanabe et al. (2006)",
"ref_id": "BIBREF18"
},
{
"start": 162,
"end": 176,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "1. due to the difference in the underlying translation models, their algorithm runs in O(n^2 b) time with beam search in practice while ours is linear. This is because each prediction step now has O(n) choices, since they need to expand nodes like VP[1,6] as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "VP[1,6] \u2192 PP[1, i] VP[i, 6],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "where the midpoint i in general has O(n) choices (just like in CKY). In other words, their grammar constant c becomes O(n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "2. different binning criteria: we use the number of tree nodes covered, while they stick to the original phrase-based idea of the number of Chinese words translated;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "3. as a result, their framework requires grammar transformation into the binary-branching Greibach Normal Form (which is not always possible), so that the resulting grammar always contains at least one Chinese word in each rule, in order for a prediction step to always make progress. Our framework, by contrast, works with any grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Besides, there are some other efforts less closely related to ours. As mentioned in Section 1, while we focus on enhancing syntax-based decoding with phrase-based ideas, other authors have explored the reverse, but also interesting, direction of enhancing phrase-based decoding with syntax-aware reordering. For example Galley and Manning (2008) propose a shift-reduce style method to allow hierarchical non-local reorderings in a phrase-based decoder. While this approach is certainly better than pure phrase-based reordering, it remains quadratic in run-time with beam search.",
"cite_spans": [
{
"start": 320,
"end": 345,
"text": "Galley and Manning (2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Within syntax-based paradigms, cube pruning (Chiang, 2007; Huang and Chiang, 2007) has become the standard method to speed up +LM decoding, which has been shown by many authors to be highly effective; we will be comparing our incremental decoder with a baseline decoder using cube pruning in Section 5. It is also important to note that cube pruning and incremental decoding are not mutually exclusive; rather, they could potentially be combined to further speed up decoding. We leave this point to future work.",
"cite_spans": [
{
"start": 44,
"end": 58,
"text": "(Chiang, 2007;",
"ref_id": "BIBREF0"
},
{
"start": 59,
"end": 82,
"text": "Huang and Chiang, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Multipass coarse-to-fine decoding is another popular idea (Venugopal et al., 2007; Zhang and Gildea, 2008; Dyer and Resnik, 2010) . In particular, Dyer and Resnik (2010) use a two-pass approach, where their first-pass, \u2212LM decoding is also incremental and polynomial-time (in the style of the Earley (1970) algorithm), but their second-pass, +LM decoding is still bottom-up CKY with cube pruning.",
"cite_spans": [
{
"start": 58,
"end": 82,
"text": "(Venugopal et al., 2007;",
"ref_id": "BIBREF17"
},
{
"start": 83,
"end": 106,
"text": "Zhang and Gildea, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 107,
"end": 129,
"text": "Dyer and Resnik, 2010)",
"ref_id": "BIBREF1"
},
{
"start": 290,
"end": 303,
"text": "Earley (1970)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "To test the merits of our incremental decoder we conduct large-scale experiments on a state-of-the-art tree-to-string system, and compare it with the standard phrase-based system Moses. Furthermore, we compare our incremental decoder with the standard cube pruning approach on the same tree-to-string decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our training corpus consists of 1.5M sentence pairs with about 38M/32M words in Chinese/English, respectively. We first word-align the corpus with GIZA++ and parse the Chinese sentences using the Berkeley parser (Petrov and Klein, 2007) , then apply the GHKM algorithm (Galley et al., 2004) to extract tree-to-string translation rules. We use the SRILM Toolkit (Stolcke, 2002) to train a trigram language model with modified Kneser-Ney smoothing on the target side of the training corpus. At decoding time, we again parse the input sentences into trees, and convert them into translation forests by rule pattern-matching (Mi et al., 2008) .",
"cite_spans": [
{
"start": 209,
"end": 233,
"text": "(Petrov and Klein, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 266,
"end": 287,
"text": "(Galley et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 354,
"end": 369,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 608,
"end": 625,
"text": "(Mi et al., 2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and System Preparation",
"sec_num": "5.1"
},
{
"text": "We use the newswire portion of the 2006 NIST MT Evaluation test set (616 sentences) as our development set and the newswire portion of the 2008 NIST MT Evaluation test set (691 sentences) as our test set. We evaluate translation quality using the BLEU-4 metric, calculated by the script mteval-v13a.pl with its default setting of case-insensitive n-gram matching. We use standard minimum error-rate training (Och, 2003) to tune the feature weights to maximize the system's BLEU score on the development set.",
"cite_spans": [
{
"start": 426,
"end": 437,
"text": "(Och, 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and System Preparation",
"sec_num": "5.1"
},
{
"text": "We first verify the assumption made in Section 3.3 for the polynomial-time result: that tree depth (as a random variable) is normally distributed with O(log n) mean and variance. Qualitatively, we verified that for most n, tree depth d(n) does look normally distributed. Quantitatively, Figure 6 shows that average tree height correlates extremely well with 3.5 log n, while tree-height variance is bounded by 5.5 log n.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Data and System Preparation",
"sec_num": "5.1"
},
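The empirical check that mean depth tracks 3.5 log n amounts to a least-squares fit of depth against log n. A minimal sketch with synthetic data (the real check would use (length, depth) pairs from the parsed corpus):

```python
import math

def fit_depth_slope(samples):
    """Least-squares slope a for depth ~ a * log(n), through the origin.
    samples: list of (sentence_length, tree_depth) pairs."""
    num = sum(math.log(n) * d for n, d in samples)
    den = sum(math.log(n) ** 2 for n, _ in samples)
    return num / den

# Synthetic data generated with slope 3.5, just to exercise the fit:
data = [(n, 3.5 * math.log(n)) for n in range(5, 100, 5)]
assert abs(fit_depth_slope(data) - 3.5) < 1e-9
```

Applied to real parse trees, a slope near 3.5 (with variance bounded by 5.5 log n) is what justifies the d_n \u223c N(O(log n), O(log n)) assumption of Section 3.3.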
{
"text": "We implemented our incremental decoding algorithm in Python, and test its performance on the development set. We first compare it with the standard cube pruning approach (also implemented in Python) on the same tree-to-string system. Figure 7(a) is a scatter plot of decoding time versus sentence length (using beam b = 50 for both systems), where we confirm that our incremental decoder scales linearly, while cube pruning shows a slight tendency toward superlinearity. Figure 7(b) is a side-by-side comparison of decoding speed versus translation quality (in BLEU scores), using various beam sizes for both systems (b=10-70 for cube pruning, and b=10-110 for incremental). We can see that incremental decoding is slightly faster than cube pruning at the same levels of translation quality, and the difference is more pronounced at smaller beams: for example, at the lowest levels of translation quality (BLEU scores around 29.5), incremental decoding takes only 0.12 seconds, about 4 times as fast as cube pruning. We stress again that cube pruning and incremental decoding are not mutually exclusive; rather, they could potentially be combined to further speed up decoding.",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 474,
"text": "Figure 7(b)",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Comparison with Cube pruning",
"sec_num": "5.2"
},
{
"text": "We also compare with the standard phrase-based system of Moses (Koehn et al., 2007) , with standard settings except for the ttable limit, which we set to 100. Table 3 : Final BLEU score and speed results on the test data (691 sentences), compared with Moses and cube pruning. Time is in seconds per sentence, including parsing time (0.21s) for the two tree-to-string decoders. system/decoder (BLEU, time): Moses (optimal d_max=10): 29.41, 10.8; tree-to-str cube pruning (b=10): 29.51, 0.65; tree-to-str cube pruning (b=20): 29.96, 0.96; tree-to-str incremental (b=10): 29.54, 0.32; tree-to-str incremental (b=50): 29.96, 0.77.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Moses",
"sec_num": "5.3"
},
{
"text": "Figure 8 compares our incremental decoder with Moses at various distortion limits (d_max = 0, 6, 10, and +\u221e). Consistent with the theoretical analysis in Section 2, Moses with no distortion limit (d_max = +\u221e) scales quadratically, and monotone decoding (d_max = 0) scales linearly. We use MERT to tune the best weights for each distortion limit, and d_max = 10 performs the best on our dev set. Table 3 reports the final results in terms of BLEU score and speed on the test set. Our linear-time incremental decoder with a small beam of size b = 10 achieves a BLEU score of 29.54, comparable to Moses with the optimal distortion limit of 10 (BLEU score 29.41). But our decoding (including source-language parsing) takes only 0.32 seconds per sentence, which is more than 30 times faster than Moses. With a larger beam of b = 50 our BLEU score increases to 29.96, half a BLEU point better than Moses, but still about 15 times faster.",
"cite_spans": [],
"ref_spans": [
{
"start": 350,
"end": 357,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Moses",
"sec_num": "5.3"
},
{
"text": "We have presented an incremental dynamic programming algorithm for tree-to-string translation that resembles phrase-based decoding. This algorithm is the first incremental algorithm that runs in polynomial time in theory, and in linear time in practice with beam search. Large-scale experiments on a state-of-the-art tree-to-string decoder confirmed that, with comparable (or better) translation quality, it can run more than 30 times faster than the phrase-based system Moses, even though ours is in Python while Moses is in C++. We also showed that it is slightly faster (and scales better) than the popular cube pruning technique. For future work we would like to apply this algorithm to forest-based translation and hierarchical systems by pruning the first-pass \u2212LM forest. We would also like to combine cube pruning with our incremental algorithm, and to study its performance with higher-order language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our implementation of cube pruning follows (Chiang, 2007; Huang and Chiang, 2007), where besides a beam size b of unique +LM items, there is also a hard limit (of 1000) on the number of (non-unique) pops from priority queues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank David Chiang, Kevin Knight, and Jonanthan Graehl for discussions and the anonymous reviewers for comments. In particular, we are indebted to the reviewer who pointed out a crucial mistake in Theorem 1 and its proof in the submission. This research was supported in part by DARPA, under contract HR0011-06-C-0022 under subcontract to BBN Technologies, and under DOI-NBC Grant N10AP20031, and in part by the National Natural Science Foundation of China, Contracts 60736014 and 90920004.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-208.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Context-free reordering, finite-state translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer and Philip Resnik. 2010. Context-free re- ordering, finite-state translation. In Proceedings of NAACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An efficient context-free parsing algorithm",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Communications of the ACM",
"volume": "13",
"issue": "2",
"pages": "94--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay Earley. 1970. An efficient context-free parsing algo- rithm. Communications of the ACM, 13(2):94-102.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A simple and effective hierarchical phrase reordering model",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of EMNLP 2008.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "273--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Pro- ceedings of HLT-NAACL, pages 273-280.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Forest rescoring: Fast decoding with integrated language models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescor- ing: Fast decoding with integrated language models. In Proceedings of ACL, Prague, Czech Rep., June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical syntax-directed translation with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of AMTA, Boston, MA, August.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Binarization, synchronous binarization, and target-side binarization",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. NAACL Workshop on Syntax and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang. 2007. Binarization, synchronous bina- rization, and target-side binarization. In Proc. NAACL Workshop on Syntax and Structure in Statistical Trans- lation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Decoding complexity in wordreplacement translation models",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "607--615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, 25(4):607-615.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL: demonstration session",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL: demonstration session.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation mod- els. In Proceedings of AMTA, pages 115-124.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Tree-tostring alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "609--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to- string alignment template for statistical machine trans- lation. In Proceedings of COLING-ACL, pages 609- 616.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Forestbased translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL: HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest- based translation. In Proceedings of ACL: HLT, Columbus, OH.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Joseph",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Joseph Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160-167.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improved inference for unlexicalized parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT- NAACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Principles and implementation of deductive parsing",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Logic Programming",
"volume": "24",
"issue": "",
"pages": "3--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3-36.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SRILM - an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP",
"volume": "30",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of ICSLP, volume 30, pages 901-904.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An efficient two-pass approach to synchronous-CFG driven statistical MT",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Venugopal, Andreas Zollmann, and Stephen Vo- gel. 2007. An efficient two-pass approach to synchronous-CFG driven statistical MT. In Proceed- ings of HLT-NAACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Left-to-right target generation for hierarchical phrase-based translation",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukuda",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Hajime Tsukuda, and Hideki Isozaki. 2006. Left-to-right target generation for hierarchical phrase-based translation. In Proceedings of COLING- ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Efficient multipass decoding for synchronous context free grammars",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2008. Efficient multi- pass decoding for synchronous context free grammars. In Proceedings of ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Beam search in phrase-based decoding expands the hypotheses in the current bin (#2) into longer ones."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Tree-to-string rule r 3 for reordering."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "An example derivation of tree-to-string translation (much simplified from Mi et al. (2008)). Shaded regions denote parts of the tree that match the rule."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Deductive system for the incremental tree-to-string decoding algorithm. Function last_{g\u22121}(\u2022) returns the rightmost g\u22121 words (for a g-gram LM), and match(\u03b7, C(r)) tests matching of rule r against the subtree rooted at node \u03b7. C(r) and E(r) are the Chinese and English sides of rule r, and function f(\u03b7, E(r)) = [x_i \u2192 \u03b7.var(i)]E(r) replaces each variable x_i on the English side of the rule with the descendant node \u03b7.var(i) under \u03b7 that matches x_i."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "For an input sentence of n words and its parse tree of depth d, the worst-case complexity of our algorithm is f(n, d) = c(cr)^d |V|^{g\u22121} = O((cr)^d n^{g\u22121}), assuming relevant English vocabulary |V| = O(n), and where constants c, r and g are the maximum number of rules matching each tree node, the maximum arity of a rule, and the language-model order, respectively."
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "(a) decoding time against sentence length; (b) BLEU score against decoding time"
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Comparison with cube pruning. The scatter plot in (a) confirms that our incremental decoding scales linearly with sentence length, while cube pruning super-linearly (b = 50 for both). The comparison in (b) shows that at the same level of translation quality, incremental decoding is slightly faster than cube pruning, especially at smaller beams."
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Mean and variance of tree depth vs. sentence length. The mean depth clearly scales with 3.5 log n, and the variance is bounded by 5.5 log n."
},
"FIGREF9": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Comparison of our incremental tree-to-string decoder with Moses in terms of speed. Moses is shown with various distortion limits (0, 6, 10, +\u221e; optimal: 10)."
},
"FIGREF10": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 8 compares our incremental decoder with Moses."
},
"TABREF0": {
"text": "Summary of time complexities of various algorithms. b is the beam width, V is the English vocabulary, and c is the number of translation rules per node. As a special case, phrase-based decoding with distortion limit d max is O(nbd max ).",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF1": {
"text": "1.2 ] <s> Bush held talks s [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ] <s> Bush held talks with p [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ] [ Sharon] <s> Bush held talks with s [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ] [Sharon ] <s> Bush held talks with Sharon c [<s> IP @\u01eb </s>] [NP @1 VP @2 ] [held NP @2.2.3 with NP @2.1.2 ]",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}