{ "paper_id": "P07-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:51:01.327923Z" }, "title": "Forest Rescoring: Faster Decoding with Integrated Language Models *", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "", "affiliation": {}, "email": "lhuang3@cis.upenn.edu" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "", "affiliation": {}, "email": "chiang@isi.edu" }, { "first": "Dan", "middle": [], "last": "Gildea", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jonathan", "middle": [], "last": "Graehl", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bob", "middle": [], "last": "Moore", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hao", "middle": [ "L H" ], "last": "Zhang", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Efficient decoding has been a fundamental problem in machine translation, especially with an integrated language model which is essential for achieving good translation quality. We develop faster approaches for this problem based on k-best parsing algorithms and demonstrate their effectiveness on both phrase-based and syntax-based MT systems. 
In both cases, our methods achieve significant speed improvements, often by more than a factor of ten, over the conventional beam-search method at the same levels of search error and translation accuracy.", "pdf_parse": { "paper_id": "P07-1019", "_pdf_hash": "", "abstract": [ { "text": "Efficient decoding has been a fundamental problem in machine translation, especially with an integrated language model which is essential for achieving good translation quality. We develop faster approaches for this problem based on k-best parsing algorithms and demonstrate their effectiveness on both phrase-based and syntax-based MT systems. In both cases, our methods achieve significant speed improvements, often by more than a factor of ten, over the conventional beam-search method at the same levels of search error and translation accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent efforts in statistical machine translation (MT) have seen promising improvements in output quality, especially the phrase-based models (Och and Ney, 2004) and syntax-based models (Chiang, 2005; Galley et al., 2006) . However, efficient decoding under these paradigms, especially with integrated language models (LMs), remains a difficult problem. Part of the complexity arises from the expressive power of the translation model: for example, a phrase-or word-based model with full reordering has exponential complexity (Knight, 1999) . The language model also, if fully integrated into the decoder, introduces an expensive overhead for maintaining target-language boundary words for dynamic programming (Wu, 1996; Och and Ney, 2004) . 
In practice, one must prune the search space aggressively to reduce it to a reasonable size.", "cite_spans": [ { "start": 142, "end": 161, "text": "(Och and Ney, 2004)", "ref_id": "BIBREF12" }, { "start": 186, "end": 200, "text": "(Chiang, 2005;", "ref_id": "BIBREF0" }, { "start": 201, "end": 221, "text": "Galley et al., 2006)", "ref_id": "BIBREF3" }, { "start": 526, "end": 540, "text": "(Knight, 1999)", "ref_id": "BIBREF8" }, { "start": 710, "end": 720, "text": "(Wu, 1996;", "ref_id": "BIBREF16" }, { "start": 721, "end": 739, "text": "Och and Ney, 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A much simpler alternative method to incorporate the LM is rescoring: we first decode without the LM (henceforth \u2212LM decoding) to produce a k-best list of candidate translations, and then rerank the k-best list using the LM. This method runs much faster in practice but often produces a considerable number of search errors since the true best translation (taking LM into account) is often outside of the k-best list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Cube pruning (Chiang, 2007 ) is a compromise between rescoring and full-integration: it rescores k subtranslations at each node of the forest, rather than only at the root node as in pure rescoring. 
By adapting the k-best parsing Algorithm 2 of Huang and Chiang (2005) , it achieves significant speed-up over full-integration on Chiang's Hiero system.", "cite_spans": [ { "start": 13, "end": 26, "text": "(Chiang, 2007", "ref_id": "BIBREF1" }, { "start": 245, "end": 268, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We push the idea behind this method further and make the following contributions in this paper:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We generalize cube pruning and adapt it to two systems very different from Hiero: a phrasebased system similar to Pharaoh (Koehn, 2004) and a tree-to-string system (Huang et al., 2006) .", "cite_spans": [ { "start": 124, "end": 137, "text": "(Koehn, 2004)", "ref_id": "BIBREF9" }, { "start": 166, "end": 186, "text": "(Huang et al., 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We also devise a faster variant of cube pruning, called cube growing, which uses a lazy version of k-best parsing (Huang and Chiang, 2005) that tries to reduce k to the minimum needed at each node to obtain the desired number of hypotheses at the root.", "cite_spans": [ { "start": 116, "end": 140, "text": "(Huang and Chiang, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Cube pruning and cube growing are collectively called forest rescoring since they both approximately rescore the packed forest of derivations from \u2212LM decoding. 
In practice they run an order of magnitude faster than full-integration with beam search, at the same level of search errors and translation accuracy as measured by BLEU.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We establish in this section a unified framework for translation with an integrated n-gram language model in both phrase-based systems and syntax-based systems based on synchronous context-free grammars (SCFGs). An SCFG (Lewis and Stearns, 1968 ) is a context-free rewriting system for generating string pairs. Each rule A \u2192 \u03b1, \u03b2 rewrites a pair of nonterminals in both languages, where \u03b1 and \u03b2 are the source and target side components, and there is a one-to-one correspondence between the nonterminal occurrences in \u03b1 and the nonterminal occurrences in \u03b2. For example, the following rule VP \u2192 PP (1) VP (2) , VP (2) PP (1) captures the swapping of VP and PP between Chinese (source) and English (target).", "cite_spans": [ { "start": 219, "end": 243, "text": "(Lewis and Stearns, 1968", "ref_id": "BIBREF10" }, { "start": 620, "end": 623, "text": "(1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Preliminaries", "sec_num": "2" }, { "text": "We will use the following example from Chinese to English for both systems described in this section:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "y\u01d4 [with] Sh\u0101l\u00f3ng [Sharon] j\u01d4x\u00edng [hold] le [past] hu\u00ect\u00e1n [meeting] 'held a meeting with Sharon'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "A typical phrase-based decoder generates partial target-language outputs in left-to-right order in the form of hypotheses (Koehn, 2004) . 
Each hypothesis has a coverage vector capturing the source-language words translated so far, and can be extended into a longer hypothesis by a phrase-pair translating an uncovered segment.", "cite_spans": [ { "start": 166, "end": 179, "text": "(Koehn, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "This process can be formalized as a deductive system. For example, the following deduction step grows a hypothesis by the phrase-pair y\u01d4 Sh\u0101l\u00f3ng, with Sharon :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "( \u2022\u2022\u2022) : (w, \"held a talk\") (\u2022\u2022\u2022\u2022\u2022) : (w + c, \"held a talk with Sharon\") (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "where a \u2022 in the coverage vector indicates the source word at this position is \"covered\" (for simplicity we omit here the ending position of the last phrase which is needed for distortion costs), and where w and w + c are the weights of the two hypotheses, respectively, with c being the cost of the phrase-pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "Similarly, the decoding problem with SCFGs can also be cast as a deductive (parsing) system (Shieber et al., 1995) . Basically, we parse the input string using the source projection of the SCFG while building the corresponding subtranslations in parallel. 
A possible deduction of the above example is notated:", "cite_spans": [ { "start": 92, "end": 114, "text": "(Shieber et al., 1995)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(PP 1,3 ) : (w 1 , t 1 ) (VP 3,6 ) : (w 2 , t 2 ) (VP 1,6 ) : (w 1 + w 2 + c \u2032 , t 2 t 1 )", "eq_num": "(2)" } ], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "where the subscripts denote indices in the input sentence just as in CKY parsing, w 1 , w 2 are the scores of the two antecedent items, and t 1 and t 2 are the corresponding subtranslations. The resulting translation t 2 t 1 is the inverted concatenation as specified by the target-side of the SCFG rule with the additional cost c \u2032 being the cost of this rule. These two deductive systems represent the search space of decoding without a language model. When one is instantiated for a particular input string, it defines a set of derivations, called a forest, represented in a compact structure that has a structure of a graph in the phrase-based case, or more generally, a hypergraph in both cases. 
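As a rough illustration (not the authors' implementation), the two \u2212LM deduction steps above can be sketched in Python. The item representations are hypothetical simplifications: a real decoder would also track, for example, the ending position of the last phrase for distortion costs.

```python
# Sketch of the two -LM deduction steps: phrase-based hypothesis
# extension over a coverage vector, and SCFG item combination.
# All names here are illustrative, not from the paper's code.

def extend_hypothesis(coverage, weight, translation, span, phrase_cost, phrase):
    # Phrase-based deduction: translate an uncovered source segment.
    start, end = span
    assert not any(coverage[start:end]), 'segment already covered'
    new_coverage = list(coverage)
    new_coverage[start:end] = [True] * (end - start)
    return new_coverage, weight + phrase_cost, translation + ' ' + phrase

def combine_scfg(item1, item2, rule_cost, inverted=True):
    # SCFG deduction: combine two antecedent items under a rule; an
    # inverted rule such as VP -> PP(1) VP(2), VP(2) PP(1) swaps the
    # target-side concatenation (t2 t1 instead of t1 t2).
    (w1, t1), (w2, t2) = item1, item2
    target = t2 + ' ' + t1 if inverted else t1 + ' ' + t2
    return w1 + w2 + rule_cost, target
```

Both steps only add weights and concatenate strings; this is what makes the \u2212LM search space a compact hypergraph.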
Accordingly we call items like (\u2022\u2022\u2022\u2022\u2022) and (VP 1,6 ) nodes in the forest, and instantiated deductions like", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "(\u2022\u2022\u2022\u2022\u2022) \u2192 ( \u2022\u2022\u2022) with Sharon, (VP 1,6 ) \u2192 (VP 3,6 ) (PP 1,3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "we call hyperedges that connect one or more antecedent nodes to a consequent node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation as Deduction", "sec_num": "2.1" }, { "text": "To integrate with a bigram language model, we can use the dynamic-programming algorithms of Och and Ney (2004) and Wu (1996) for phrase-based and SCFG-based systems, respectively, which we may think of as doing a finer-grained version of the deductions above. Each node v in the forest will be split into a set of augmented items, which we call +LM items. For phrase-based decoding, a +LM item has the form (v a ) where a is the last word of the hypothesis. 
Thus a +LM version of Deduction (1) might be:", "cite_spans": [ { "start": 92, "end": 110, "text": "Och and Ney (2004)", "ref_id": "BIBREF12" }, { "start": 115, "end": 124, "text": "Wu (1996)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "( \u2022\u2022\u2022 talk ) : (w, \"held a talk\") (\u2022\u2022\u2022\u2022\u2022 Sharon ) : (w \u2032 , \"held a talk with Sharon\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "[Figure 1: grids of +LM items for the hyperedge combining the (PP 1,3) items with\u22c6Sharon, along\u22c6Sharon, with\u22c6Shalong and the (VP 3,6) items held\u22c6meeting, held\u22c6talk, hold\u22c6conference; scores shown include 1.0, 4.0, 7.0, 2.5, 2.4, 8.3, 9.5, 9.2.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "where the score of the resulting +LM item", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "w \u2032 = w + c \u2212 log P lm (with | talk)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "now includes a combination cost due to the bigrams formed when applying the phrase-pair. Similarly, a +LM item in SCFG-based models has the form (v a\u22c6b ), where a and b are boundary words of the hypothesis string, and \u22c6 is a placeholder symbol for an elided part of that string, indicating that a possible translation of the part of the input spanned by v starts with a and ends with b. 
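A minimal sketch of this phrase-based +LM deduction under a bigram model; the lm argument is a hypothetical dictionary from (history, word) pairs to probabilities, standing in for a real language model:

```python
import math

def extend_plus_lm(last_word, weight, phrase_words, phrase_cost, lm):
    # Phrase-based +LM deduction under a bigram model: the new score
    # adds the phrase cost plus a combination cost, i.e. the negative
    # log probability of each bigram formed when appending the phrase.
    # `lm` maps (history, word) -> probability (hypothetical API).
    w = weight + phrase_cost
    history = last_word
    for word in phrase_words:
        w -= math.log(lm.get((history, word), 1e-10))
        history = word
    return w, history  # new weight and new boundary word of the item
```

The returned boundary word is exactly the extra state that +LM items carry over \u2212LM items.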
An example +LM version of Deduction (2) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "(PP with \u22c6 Sharon 1,3 ): (w 1 , t 1 ) (VP held \u22c6 talk 3,6 ): (w 2 , t 2 ) (VP held \u22c6 Sharon 1,6 ): (w, t 2 t 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "where w = w 1 + w 2 + c \u2032 \u2212 log P lm (with | talk), with a similar combination cost formed by combining adjacent boundary words of the antecedents. This scheme can be easily extended to work with a general n-gram model (Chiang, 2007) . The experiments in this paper use trigram models.", "cite_spans": [ { "start": 213, "end": 227, "text": "(Chiang, 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "The conventional full-integration approach traverses the forest bottom-up and explores all possible +LM deductions along each hyperedge. The theoretical running time of this algorithm is O(|F ||T |^{m\u22121}) for phrase-based models, and O(|F ||T |^{4(m\u22121)}) for binary-branching SCFG-based models, where |F | is the size of the forest, and |T | is the number of possible target-side words. Even if we assume a constant number of translations for each word in the input, with a trigram model, this still amounts to O(n^{11}) for SCFG-based models and O(2^n n^2) for phrase-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding a Language Model", "sec_num": "2.2" }, { "text": "Cube pruning (Chiang, 2007) reduces the search space significantly based on the observation that when the above method is combined with beam search, only a small fraction of the possible +LM items at a node will escape being pruned, and moreover we can select with reasonable accuracy those top-k items without computing all possible items first.
In a nutshell, cube pruning works on the \u2212LM forest, keeping at most k +LM items at each node, and uses the k-best parsing Algorithm 2 of Huang and Chiang (2005) to speed up the computation. For simplicity of presentation, we will use concrete SCFG-based examples, but the method applies to the general hypergraph framework in Section 2.", "cite_spans": [ { "start": 13, "end": 27, "text": "(Chiang, 2007)", "ref_id": "BIBREF1" }, { "start": 485, "end": 508, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Cube Pruning", "sec_num": "3" }, { "text": "Consider Figure 1 (a). Here k = 3 and we use D(v) to denote the top-k +LM items (in sorted order) of node v. Suppose we have computed D(u 1 ) and D(u 2 ) for the two antecedent nodes u 1 = (VP 3,6 ) and u 2 = (PP 1,3 ) respectively. Then for the consequent node v = (VP 1,6 ) we just need to derive the top-3 from the 9 combinations of (D", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Cube Pruning", "sec_num": "3" }, { "text": "i (u 1 ), D j (u 2 )) with i, j \u2208 [1, 3].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Pruning", "sec_num": "3" }, { "text": "Since the antecedent items are sorted, it is very likely that the best consequent items in this grid lie towards the upper-left corner. 
This situation is very similar to k-best parsing and we can adapt Algorithm 2 of Huang and Chiang (2005) here to explore this grid in a best-first order.", "cite_spans": [ { "start": 220, "end": 243, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Cube Pruning", "sec_num": "3" }, { "text": "Suppose that the combination costs are negligible, and therefore the weight of a consequent item is just the product of the weights of the antecedent items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Pruning", "sec_num": "3" }, { "text": "1: function CUBE(F ) \u22b2 the input is a forest F 2: for v \u2208 F in (bottom-up) topological order do 3: KBEST(v) 4: return D1(TOP) 5: procedure KBEST(v) 6: cand \u2190 { e, 1 | e \u2208 IN (v)} \u22b2 for each incoming e 7: HEAPIFY(cand ) \u22b2 a priority queue of candidates 8: buf \u2190 \u2205 9: while |cand | > 0 and |buf | < k do 10: item \u2190 POP-MIN(cand ) 11: append item to buf 12: PUSHSUCC(item, cand ) 13: sort buf to D(v) 14: procedure PUSHSUCC( e, j , cand ) 15: e is v \u2192 u 1 . . . u |e| 16: for i in 1 . . . |e| do 17: j \u2032 \u2190 j + b i 18: if |D(u i )| \u2265 j \u2032 i then 19: PUSH( e, j \u2032 , cand ) Figure 2 : Pseudocode for cube pruning.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "1: function CUBE(F )", "sec_num": null }, { "text": "Then we know that D 1 (v) = (D 1 (u 1 ), D 1 (u 2 )), the upper-left corner of the grid. Moreover, we know that D 2 (v) is the better of (D 1 (u 1 ), D 2 (u 2 )) and (D 2 (u 1 ), D 1 (u 2 )), the two neighbors of the upper-left corner. We continue in this way (see Figure 2 ) for the next-best item. However, when we take into account the combination costs, this grid is no longer monotonic in general, and the above algorithm will not always enumerate items in best-first order. We can see this in the first iteration in Figure 1(b) , where an item with score 2.5 has been enumerated even though there is an item with score 2.4 still to come. Thus we risk making more search errors than the full-integration method, but in practice the loss is much less significant than the speedup. Because of this disordering, we do not put the enumerated items directly into D(v); instead, we collect items in a buffer (buf in Figure 2 ) and re-sort the buffer into D(v) after it has accumulated k items. 
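The core of this loop for a single hyperedge can be sketched as follows; this is a simplified illustration with hypothetical names (not the Hiero implementation), where combo_cost stands in for the LM combination cost of a cell:

```python
import heapq

def cube_prune_edge(rows, cols, combo_cost, k):
    # Best-first enumeration of (at most) the k best combinations of
    # two sorted antecedent score lists along one hyperedge. Because
    # combination costs can break monotonicity, popped items go into
    # a buffer that is re-sorted at the end, as in the pseudocode.
    def score(i, j):
        return rows[i] + cols[j] + combo_cost(i, j)
    cand = [(score(0, 0), (0, 0))]   # start at the upper-left corner
    seen = {(0, 0)}
    buf = []
    while cand and len(buf) < k:
        s, (i, j) = heapq.heappop(cand)
        buf.append(s)
        for ni, nj in ((i + 1, j), (i, j + 1)):  # PUSHSUCC: two neighbors
            if ni < len(rows) and nj < len(cols) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(cand, (score(ni, nj), (ni, nj)))
    return sorted(buf)  # re-sort buf into D(v)
```

With a zero combination cost the pops are already in best-first order; with a nonmonotonic combo_cost the final sort repairs the disordering described above.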
1 In general the grammar may have multiple rules that share the same source side but have different target sides, which we have treated here as separate hyperedges in the \u2212LM forest. [Table 1 (method; k-best algorithm; +LM rescoring): rescoring uses Alg. 3, only at the root node; cube pruning uses Alg. 2, on-the-fly at each node; cube growing uses Alg. 3, on-the-fly at each node.] In Hiero, these hyperedges are processed as a single unit which we call a hyperedge bundle. The different target sides then constitute a third dimension of the grid, forming a cube of possible combinations (Chiang, 2007) .", "cite_spans": [ { "start": 803, "end": 804, "text": "1", "ref_id": null }, { "start": 1352, "end": 1366, "text": "(Chiang, 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 75, "end": 83, "text": "Figure 2", "ref_id": null }, { "start": 332, "end": 343, "text": "Figure 1(b)", "ref_id": "FIGREF1" }, { "start": 725, "end": 733, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "1: function CUBE(F )", "sec_num": null }, { "text": "Now consider that there are many hyperedges that derive v, and we are only interested in the top +LM items of v over all incoming hyperedges. Following Algorithm 2, we initialize the priority queue cand with the upper-left corner item from each hyperedge, and proceed as above. See Figure 2 for the pseudocode for cube pruning. We use the notation e, j to identify the derivation of v via the hyperedge e and the j i -th best subderivation of antecedent u i (1 \u2264 i \u2264 |j|). Also, we let 1 stand for a vector whose elements are all 1, and b i for the vector whose members are all 0 except for the i-th, whose value is 1 (the dimensionality of either should be evident from the context). The heart of the algorithm is lines 10-12. Lines 10-11 move the best derivation e, j from cand to buf , and then line 12 pushes its successors { e, j + b i | i \u2208 1 . . . 
|e|} into cand .", "cite_spans": [], "ref_spans": [ { "start": 279, "end": 287, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "1: function CUBE(F )", "sec_num": null }, { "text": "Although much faster than full-integration, cube pruning still computes a fixed number of +LM items at each node, many of which will not be useful for arriving at the 1-best hypothesis at the root. It would be more efficient to compute as few +LM items at each node as are needed to obtain the 1-best hypothesis at the root. This new method, called cube growing, is a lazy version of cube pruning, just as Algorithm 3 of Huang and Chiang (2005) is a lazy version of Algorithm 2 (see Table 1 ).", "cite_spans": [ { "start": 420, "end": 443, "text": "Huang and Chiang (2005)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 484, "end": 491, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "Instead of traversing the forest bottom-up, cube growing visits nodes recursively in depth-first order from the root node (Figure 4 ). First we call LAZYJTHBEST(TOP, 1), which uses the same algorithm as cube pruning to find the 1-best +LM item of the root node using the best +LM items of the antecedent nodes. [Figure 3: (a) the monotonic grid of lower bounds for the hyperedge of Figure 1(a) , assuming h combo (e) = 0.1 for this hyperedge; (b) cube growing prevents early ranking of the top-left cell (2.5) as the best item in this grid.]", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 131, "text": "(Figure 4", "ref_id": "FIGREF4" }, { "start": 293, "end": 304, "text": "Figure 1(a)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "However, in this case the best +LM items of the antecedent nodes are not known, because we have not visited them yet. So we recursively invoke LAZYJTHBEST on the antecedent nodes to obtain them as needed. 
Each invocation of LAZYJTHBEST(v, j) will recursively call itself on the antecedents of v until it is confident that the jth best +LM item for node v has been found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "Consider again the case of one hyperedge e. Because of the nonmonotonicity caused by combination costs, the first +LM item ( e, 1 ) popped from cand is not guaranteed to be the best of all combinations along this hyperedge (for example, the top-left cell of 2.5 in Figure 1 is not the best in the grid). So we cannot simply enumerate items just as they come off of cand . 2 Instead, we need to store up popped items in a buffer buf , just as in cube pruning, and enumerate an item only when we are confident that it will never be surpassed in the future. In other words, we would like to have an estimate of the best item not explored yet (analogous to the heuristic function in A* search). If we can establish a lower bound h combo (e) on the combination cost of any +LM deduction via hyperedge e, then we can form a monotonic grid (see Figure 3(a) ) of lower bounds on the grid of combinations, by using h combo (e) in place of the true combination cost for each +LM item x in the grid; call this lower bound h(x).", "cite_spans": [], "ref_spans": [ { "start": 265, "end": 273, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 838, "end": 849, "text": "Figure 3(a)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "Now suppose that the gray-shaded cells in Figure 3(a) are the members of cand . 
Then the minimum of h(x) over the items in cand , in this example,", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 53, "text": "Figure 3(a)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "1: procedure LAZYJTHBEST(v, j) 2: if cand [v] is undefined then 3: cand [v] \u2190 \u2205 4: FIRE(e, 1, cand ) foreach e \u2208 IN (v) 5: buf [v] \u2190 \u2205 6: while |D(v)| < j and |buf [v]| + |D(v)| < k and |cand [v]| > 0 do 7: item \u2190 POP-MIN(cand [v]) 8: PUSH(item, buf [v]) 9: PUSHSUCC(item, cand [v]) 10: bound \u2190 min{h(x) | x \u2208 cand [v]} 11: ENUM(buf [v], D(v), bound ) 12: ENUM(buf [v], D(v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "bound = min_{x \u2208 cand[v]} h(x)", "eq_num": "(3)" } ], "section": "Cube Growing", "sec_num": "4" }, { "text": "is a lower bound on the true cost of any future item that is yet to be explored for v.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "Proof. For any item x that is not explored yet, the true cost c(x) \u2265 h(x), by the definition of h. And there exists an item y \u2208 cand[v] along the same hyperedge such that h(x) \u2265 h(y), due to the monotonicity of h within the grid along one hyperedge. We also have h(y) \u2265 bound by the definition of bound. 
Therefore c(x) \u2265 bound .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "Now we can safely pop the best item from buf if its true cost MIN(buf ) is better than bound and pass it up to the consequent node (lines 21-23); but otherwise, we have to wait for more items to accumulate in buf to prevent a potential search error, for example, in the case of Figure 3(b) , where the top-left cell (2.5) is worse than the current bound of 2.2. The update of bound in each iteration (line 10) can be efficiently implemented by using another heap with the same contents as cand but prioritized by h instead.", "cite_spans": [], "ref_spans": [ { "start": 278, "end": 289, "text": "Figure 3(b)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "In practice this is a negligible overhead on top of cube pruning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "We now turn to the problem of estimating the heuristic function h combo . In practice, computing true lower bounds of the combination costs is too slow and would compromise the speed-up gained from cube growing. So we instead use a much simpler method that just calculates the minimum combination cost of each hyperedge in the top-i derivations of the root node in \u2212LM decoding. This is just an approximation of the true lower bound, and bad estimates can lead to search errors. 
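The safe-enumeration test itself can be sketched as follows, under an assumed simplified representation: buf is a min-heap of true item costs, and cand_bounds holds the optimistic estimates h(x) for the still-unexplored frontier.

```python
import heapq

def enum_safe(buf, cand_bounds, D, k):
    # Cube-growing enumeration step (simplified sketch): move a
    # buffered item up to D(v) only if its true cost beats `bound`,
    # the minimum optimistic estimate over unexplored candidates
    # (Eq. 3); otherwise wait for more items to accumulate.
    bound = min(cand_bounds) if cand_bounds else float('inf')
    while buf and len(D) < k and buf[0] <= bound:
        D.append(heapq.heappop(buf))
    return D
```

With an empty frontier the bound is infinite and the whole buffer may be flushed; otherwise enumeration stalls exactly when the best buffered cost exceeds the bound.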
However, the hope is that by choosing the right value of i, these estimates will be accurate enough to affect the search quality only slightly, which is analogous to \"almost admissible\" heuristics in A* search (Soricut, 2006) .", "cite_spans": [ { "start": 689, "end": 704, "text": "(Soricut, 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Cube Growing", "sec_num": "4" }, { "text": "We test our methods on two large-scale English-to-Chinese translation systems: a phrase-based system and our tree-to-string system (Huang et al., 2006) . [Figure 6 : A hyperedge bundle represents all +LM deductions that derive an item in the current bin from the same coverage vector (see Figure 5 ). The phrases on the top denote the target-sides of applicable phrase-pairs sharing the same source-side.]", "cite_spans": [ { "start": 131, "end": 151, "text": "(Huang et al., 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 158, "end": 166, "text": "Figure 6", "ref_id": null }, { "start": 293, "end": 301, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We implemented Cubit, a Python clone of the Pharaoh decoder (Koehn, 2004) , 3 and adapted cube pruning to it as follows. As in Pharaoh, each bin i contains hypotheses (i.e., +LM items) covering i words on the source-side. But at each bin (see Figure 5) , all +LM items from previous bins are first partitioned into \u2212LM items; then the hyperedges leading from those \u2212LM items are further grouped into hyperedge bundles ( Figure 6 ), which are placed into the priority queue of the current bin. Our data preparation follows Huang et al. (2006) : the training data is a parallel corpus of 28.3M words on the English side, and a trigram language model is trained on the Chinese side. We use the same test set as (Huang et al., 2006) , which is a 140-sentence subset of the NIST 2003 test set with 9-36 words on the English side. 
The weights for the log-linear model are tuned on a separate development set. We set the decoder phrase-table limit to 100 as suggested in (Koehn, 2004) and the distortion limit to 4. Figure 7 (a) compares cube pruning against full-integration in terms of search quality vs. search efficiency, under various pruning settings (threshold beam set to 0.0001, stack size varying from 1 to 200). Search quality is measured by average model cost per sentence (lower is better), and search efficiency is measured by the average number of hypotheses generated (smaller is faster). At each level of search quality, the speed-up is always better than a factor of 10. The speed-up at the lowest search-error level is a factor of 32. Figure 7 (b) makes a similar comparison but measures search quality by BLEU, which shows an even larger relative speed-up for a given BLEU score, because translations with very different model costs might have similar BLEU scores. It also shows that our full-integration implementation in Cubit faithfully reproduces Pharaoh's performance. Fixing the stack size to 100 and varying the threshold yielded a similar result.", "cite_spans": [ { "start": 60, "end": 73, "text": "(Koehn, 2004)", "ref_id": "BIBREF9" }, { "start": 76, "end": 77, "text": "3", "ref_id": null }, { "start": 243, "end": 252, "text": "Figure 5)", "ref_id": null }, { "start": 522, "end": 541, "text": "Huang et al. 
(2006)", "ref_id": "BIBREF5" }, { "start": 708, "end": 728, "text": "(Huang et al., 2006)", "ref_id": "BIBREF5" }, { "start": 964, "end": 977, "text": "(Koehn, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 420, "end": 428, "text": "Figure 6", "ref_id": null }, { "start": 1009, "end": 1017, "text": "Figure 7", "ref_id": "FIGREF7" }, { "start": 1545, "end": 1553, "text": "Figure 7", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Phrase-based Decoding", "sec_num": "5.1" }, { "text": "In tree-to-string (also called syntax-directed) decoding (Huang et al., 2006; Liu et al., 2006) , the source string is first parsed into a tree, which is then recursively converted into a target string according to transfer rules in a synchronous grammar (Galley et al., 2006) . For instance, the following rule translates an English passive construction into Chinese:", "cite_spans": [ { "start": 57, "end": 77, "text": "(Huang et al., 2006;", "ref_id": "BIBREF5" }, { "start": 78, "end": 95, "text": "Liu et al., 2006)", "ref_id": "BIBREF11" }, { "start": 255, "end": 276, "text": "(Galley et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Tree-to-string Decoding", "sec_num": "5.2" }, { "text": "VP(VBD(was) VP-C(x1:VBN PP(IN(by) x2:NP-C))) \u2192 b\u00e8i x2 x1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree-to-string Decoding", "sec_num": "5.2" }, { "text": "Our tree-to-string system performs slightly better than the state-of-the-art phrase-based system Pharaoh on the above data set. Although different from the SCFG-based systems in Section 2, its derivation trees remain context-free and the search space is still a hypergraph, where we can adapt the methods presented in Sections 3 and 4. The data set is the same as in Section 5.1, except that we also parsed the English side using a variant of the Collins (1997) parser, and then extracted 24.7M tree-to-string rules using the algorithm of (Galley et al., 2006) . 
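Applying the target side of such a rule amounts to splicing the translations of the matched subtrees into the rule's target template. A minimal sketch; the template and dictionary shapes, and the pinyin placeholder strings, are hypothetical, purely for illustration:

```python
def apply_target_side(template, children):
    """template: target-side symbols, terminals mixed with variables,
    e.g. ['bèi', 'x2', 'x1'] for the passive rule above.
    children: variable name -> already-translated token list."""
    out = []
    for sym in template:
        out.extend(children.get(sym, [sym]))  # splice variable or copy terminal
    return out

# hypothetical subtree translations for "was held by Sharon":
# x1 (the VBN) and x2 (the NP-C) rendered as pinyin placeholders
result = apply_target_side(['bèi', 'x2', 'x1'],
                           {'x1': ['juxing'], 'x2': ['Shalong']})
assert result == ['bèi', 'Shalong', 'juxing']
```

Note how the rule both reorders its variables (x2 before x1) and inserts the function word bèi, which is exactly what makes such transfer rules more expressive than plain phrase pairs.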
Since our tree-to-string rules may have many variables, we first binarize each hyperedge in the forest on the target projection (Huang, 2007) . All three +LM decoding methods to be compared below take these binarized forests as input. For cube growing, we use a non-duplicate k-best method (Huang et al., 2006) to get 100-best unique translations according to \u2212LM to estimate the lower-bound heuristics. 4 This preprocessing step takes on average 0.12 seconds per sentence, which is negligible in comparison to the +LM decoding time. Figure 8 (a) compares cube growing and cube pruning against full-integration under various beam settings in the same fashion as Figure 7(a) . At the lowest level of search error, the relative speed-up from cube growing and cube pruning compared with full-integration is a factor of 9.8 and 4.1, respectively. Figure 8(b) is a similar comparison in terms of BLEU scores and shows an even bigger advantage of cube growing and cube pruning over the baseline. ", "cite_spans": [ { "start": 443, "end": 457, "text": "Collins (1997)", "ref_id": "BIBREF2" }, { "start": 535, "end": 556, "text": "(Galley et al., 2006)", "ref_id": "BIBREF3" }, { "start": 687, "end": 700, "text": "(Huang, 2007)", "ref_id": "BIBREF6" }, { "start": 853, "end": 873, "text": "(Huang et al., 2006)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 1097, "end": 1105, "text": "Figure 8", "ref_id": "FIGREF8" }, { "start": 1225, "end": 1236, "text": "Figure 7(a)", "ref_id": "FIGREF7" }, { "start": 1409, "end": 1420, "text": "Figure 8(b)", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Tree-to-string Decoding", "sec_num": "5.2" }, { "text": "We have presented a novel extension of cube pruning called cube growing, and shown how both can be seen as general forest rescoring techniques applicable to both phrase-based and syntax-based decoding. 
We evaluated these methods on large-scale translation tasks and observed considerable speed improvements, often by more than a factor of ten. We plan to investigate how to adapt cube growing to phrase-based and hierarchical phrase-based systems. These forest rescoring algorithms have potential applications to other computationally intensive tasks involving combinations of different models, for example, head-lexicalized parsing (Collins, 1997) ; joint parsing and semantic role labeling (Sutton and McCallum, 2005) ; or tagging and parsing with nonlocal features. Thus we envision forest rescoring as being of general applicability for reducing complicated search spaces, as an alternative to simulated annealing methods (Kirkpatrick et al., 1983) .", "cite_spans": [ { "start": 632, "end": 647, "text": "(Collins, 1997)", "ref_id": "BIBREF2" }, { "start": 691, "end": 718, "text": "(Sutton and McCallum, 2005)", "ref_id": "BIBREF15" }, { "start": 925, "end": 951, "text": "(Kirkpatrick et al., 1983)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Notice that different combinations might have the same resulting item, in which case we only keep the one with the better score (sometimes called hypothesis recombination in the MT literature), so the number of items in D(v) might be less than k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If we did, then the out-of-order enumeration of +LM items at an antecedent node would cause an entire row or column in the grid to be disordered at the consequent node, potentially leading to a multiplication of search errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our tests, Cubit always obtains a BLEU score within 0.004 of Pharaoh's (Figure 7(b)). 
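The hypothesis recombination mentioned in the footnote above amounts to keeping at most one item per dynamic-programming signature. A minimal sketch, with the signature and item shapes assumed for illustration:

```python
def recombine(candidates):
    """candidates: (signature, item, cost) triples, where items sharing
    a signature (e.g. the same LM boundary words) are interchangeable
    for all future combinations. Keep only the cheapest per signature,
    which is why |D(v)| can end up smaller than k."""
    best = {}
    for sig, item, cost in candidates:
        if sig not in best or cost < best[sig][1]:
            best[sig] = (item, cost)
    return best

merged = recombine([('s1', 'derivA', 1.3), ('s1', 'derivB', 1.1),
                    ('s2', 'derivC', 0.9)])
assert merged == {'s1': ('derivB', 1.1), 's2': ('derivC', 0.9)}
```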
Source code available at http://www.cis.upenn.edu/~lhuang3/cubit/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If a hyperedge is not represented at all in the 100-best \u2212LM derivations at the root node, we use the 1-best \u2212LM derivation of this hyperedge instead. Here, rules that share the same source side but have different target sides are treated as separate hyperedges, not collected into hyperedge bundles, since grouping becomes difficult after binarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2). To appear.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Three generative lexicalised models for statistical parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1997, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 1997. Three generative lexicalised models for statistical parsing. In Proc. 
ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Scalable inference and training of context-rich syntactic translation models", "authors": [ { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" }, { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "D", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "S", "middle": [], "last": "Deneefe", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "I", "middle": [], "last": "Thayer", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefe, W. Wang, and I. Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. COLING-ACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Better k-best parsing", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proc. IWPT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proc. IWPT.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical syntax-directed translation with extended domain of locality", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" }, { "first": "Aravind", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2006, "venue": "Proc. AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. 
AMTA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Binarization, synchronous binarization, and target-side binarization", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2007, "venue": "Proc. NAACL Workshop on Syntax and Structure in Statistical Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang. 2007. Binarization, synchronous binarization, and target-side binarization. In Proc. NAACL Workshop on Syntax and Structure in Statistical Translation.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Optimization by simulated annealing", "authors": [ { "first": "S", "middle": [], "last": "Kirkpatrick", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Gelatt", "suffix": "" }, { "first": "M", "middle": [ "P" ], "last": "Vecchi", "suffix": "" } ], "year": 1983, "venue": "Science", "volume": "220", "issue": "4598", "pages": "671--680", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. 1983. Optimization by simulated annealing. Science, 220(4598):671-680.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Decoding complexity in word-replacement translation models", "authors": [ { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "4", "pages": "607--615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Knight. 1999. Decoding complexity in word-replacement translation models. Computational Linguistics, 25(4):607-615.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proc. 
AMTA", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Proc. AMTA, pages 115-124.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Syntax-directed transduction", "authors": [ { "first": "P", "middle": [ "M" ], "last": "Lewis", "suffix": "" }, { "first": "R", "middle": [ "E" ], "last": "Stearns", "suffix": "" } ], "year": 1968, "venue": "J. ACM", "volume": "15", "issue": "", "pages": "465--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. M. Lewis and R. E. Stearns. 1968. Syntax-directed transduction. J. ACM, 15:465-488.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Tree-to-string alignment template for statistical machine translation", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shouxun", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2006, "venue": "Proc. COLING-ACL", "volume": "", "issue": "", "pages": "609--616", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to-string alignment template for statistical machine translation. In Proc. COLING-ACL, pages 609-616.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The alignment template approach to statistical machine translation. Computational Linguistics", "authors": [ { "first": "Joseph", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "", "volume": "30", "issue": "", "pages": "417--449", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. 
Computational Linguistics, 30:417-449.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Principles and implementation of deductive parsing", "authors": [ { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Schabes", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 1995, "venue": "J. Logic Programming", "volume": "24", "issue": "", "pages": "3--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. J. Logic Programming, 24:3-36.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Natural Language Generation using an Information-Slim Representation", "authors": [ { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radu Soricut. 2006. Natural Language Generation using an Information-Slim Representation. Ph.D. thesis, University of Southern California.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Joint parsing and semantic role labeling", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2005, "venue": "Proc. CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton and Andrew McCallum. 2005. Joint parsing and semantic role labeling. In Proc. CoNLL 2005.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A polynomial-time algorithm for statistical machine translation", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1996, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1996. 
A polynomial-time algorithm for statistical machine translation. In Proc. ACL.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "held a talk with Sharon\")" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Cube pruning along one hyperedge. (a): the numbers in the grid denote the score of the resulting +LM item, including the combination cost; (b)-(d): the best-first enumeration of the top three items. Notice that the items popped in (b) and (c) are out of order due to the non-monotonicity of the combination cost." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Figure 1(b)-(d)), enumerating the consequent items best-first while keeping track of a relatively small number of candidates (shaded cells in Figure 1(b), cand in" }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Example of cube growing along one hyperedge. (a): the h(x) scores for the grid in" }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Pseudocode of cube growing: 13: procedure FIRE(e, j, cand) 14: e is v \u2192 u_1 ... u_|e| 15: for i in 1 ... |e| do 16: LAZYJTHBEST(u_i, j_i) 17: if |D(u_i)| < j_i then return 18: PUSH((e, j), cand) 19: procedure PUSHSUCC((e, j), cand) 20: FIRE(e, j + b^i, cand) for each i in 1 ... |e| 21: procedure ENUM(buf, D, bound) 22: while |buf| > 0 and MIN(buf) < bound do 23: append POP-MIN(buf) to D. For example, min{2.2, 5.1} = 2.2 is a lower bound on the cost of any item in the future for the hyperedge e. Indeed, if cand contains items from multiple hyperedges for a single consequent node, this is still a valid lower bound. More formally: Lemma 1. For each node v in the forest, the term" }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "(a) Pharaoh expands the hypotheses in the current bin (#2) into longer ones. 
(b) In Cubit, hypotheses in previous bins are fed via hyperedge bundles (solid arrows) into a priority queue (shaded triangle), which empties into the current bin (#5)." }, "FIGREF7": { "uris": null, "num": null, "type_str": "figure", "text": "Cube pruning vs. full-integration (with beam search) on phrase-based decoding." }, "FIGREF8": { "uris": null, "num": null, "type_str": "figure", "text": "Cube growing vs. cube pruning vs. full-integration (with beam search) on tree-to-string decoding." }, "TABREF0": { "html": null, "text": "Comparison of the three methods.", "num": null, "content": "", "type_str": "table" } } } }
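The ENUM procedure from the cube growing pseudocode (Figure 4) translates almost line-for-line into Python. The fragment below is an illustrative sketch under assumed data shapes (a min-heap of (cost, item) pairs), not the authors' decoder; the example numbers echo Figure 3(b), where the cheapest buffered item (2.5) is worse than the bound of 2.2:

```python
import heapq

def enum(buf, D, bound):
    """Lines 21-23 of the pseudocode: drain the buffer into the
    derivation list D only while its cheapest true cost beats the
    heuristic bound; anything worse must wait, since a cheaper item
    may still arrive from an unexplored grid cell."""
    while buf and buf[0][0] < bound:
        D.append(heapq.heappop(buf))

buf = [(2.5, 'top-left'), (3.1, 'next')]
heapq.heapify(buf)
D = []
enum(buf, D, bound=2.2)   # 2.5 is not better than 2.2: nothing pops
assert D == []
enum(buf, D, bound=3.0)   # with a looser bound, 2.5 is now safe
assert D == [(2.5, 'top-left')]
```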