{
"paper_id": "P08-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:35:10.143110Z"
},
"title": "Forest-Based Translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": "",
"affiliation": {
"laboratory": "Key Lab. of Intelligent Information Processing",
"institution": "",
"location": {}
},
"email": "htmi@ict.ac.cn"
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania Chinese Academy of Sciences Levine Hall",
"location": {
"addrLine": "3330 Walnut Street",
"postBox": "P.O. Box 2704",
"postCode": "100190, 19104",
"settlement": "Beijing",
"region": "PA",
"country": "China Philadelphia, USA"
}
},
"email": "lhuang3@cis.upenn.edu"
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "Key Lab. of Intelligent Information Processing",
"institution": "",
"location": {}
},
"email": "liuqun@ict.ac.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Among syntax-based translation models, the tree-based approach, which takes as input a parse tree of the source sentence, is a promising direction being faster and simpler than its string-based counterpart. However, current tree-based systems suffer from a major drawback: they only use the 1-best parse to direct the translation, which potentially introduces translation mistakes due to parsing errors. We propose a forest-based approach that translates a packed forest of exponentially many parses, which encodes many more alternatives than standard n-best lists. Large-scale experiments show an absolute improvement of 1.7 BLEU points over the 1-best baseline. This result is also 0.8 points higher than decoding with 30-best parses, and takes even less time.",
"pdf_parse": {
"paper_id": "P08-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Among syntax-based translation models, the tree-based approach, which takes as input a parse tree of the source sentence, is a promising direction being faster and simpler than its string-based counterpart. However, current tree-based systems suffer from a major drawback: they only use the 1-best parse to direct the translation, which potentially introduces translation mistakes due to parsing errors. We propose a forest-based approach that translates a packed forest of exponentially many parses, which encodes many more alternatives than standard n-best lists. Large-scale experiments show an absolute improvement of 1.7 BLEU points over the 1-best baseline. This result is also 0.8 points higher than decoding with 30-best parses, and takes even less time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Syntax-based machine translation has witnessed promising improvements in recent years. Depending on the type of input, these efforts can be divided into two broad categories: the string-based systems whose input is a string to be simultaneously parsed and translated by a synchronous grammar (Wu, 1997; Chiang, 2005; Galley et al., 2006) , and the tree-based systems whose input is already a parse tree to be directly converted into a target tree or string (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; . Compared with their string-based counterparts, treebased systems offer some attractive features: they are much faster in decoding (linear time vs. cubic time, see ), do not require a binary-branching grammar as in string-based models (Zhang et al., 2006) , and can have separate grammars for parsing and translation, say, a context-free grammar for the former and a tree substitution grammar for the latter . However, despite these advantages, current tree-based systems suffer from a major drawback: they only use the 1best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006) . This situation becomes worse with resource-poor source languages without enough Treebank data to train a high-accuracy parser.",
"cite_spans": [
{
"start": 292,
"end": 302,
"text": "(Wu, 1997;",
"ref_id": "BIBREF23"
},
{
"start": 303,
"end": 316,
"text": "Chiang, 2005;",
"ref_id": "BIBREF2"
},
{
"start": 317,
"end": 337,
"text": "Galley et al., 2006)",
"ref_id": "BIBREF7"
},
{
"start": 457,
"end": 468,
"text": "(Lin, 2004;",
"ref_id": "BIBREF15"
},
{
"start": 469,
"end": 491,
"text": "Ding and Palmer, 2005;",
"ref_id": "BIBREF5"
},
{
"start": 492,
"end": 511,
"text": "Quirk et al., 2005;",
"ref_id": "BIBREF21"
},
{
"start": 512,
"end": 529,
"text": "Liu et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 766,
"end": 786,
"text": "(Zhang et al., 2006)",
"ref_id": "BIBREF25"
},
{
"start": 1167,
"end": 1199,
"text": "(Quirk and Corston-Oliver, 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One obvious solution to this problem is to take as input k-best parses, instead of a single tree. This kbest list postpones some disambiguation to the decoder, which may recover from parsing errors by getting a better translation from a non 1-best parse. However, a k-best list, with its limited scope, often has too few variations and too many redundancies; for example, a 50-best list typically encodes a combination of 5 or 6 binary ambiguities (since 2 5 < 50 < 2 6 ), and many subtrees are repeated across different parses (Huang, 2008) . It is thus inefficient either to decode separately with each of these very similar trees. Longer sentences will also aggravate this situation as the number of parses grows exponentially with the sentence length.",
"cite_spans": [
{
"start": 528,
"end": 541,
"text": "(Huang, 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We instead propose a new approach, forest-based translation (Section 3), where the decoder translates a packed forest of exponentially many parses, 1 which compactly encodes many more alternatives than k-best parses. This scheme can be seen as a compromise between the string-based and treebased methods, while combining the advantages of both: decoding is still fast, yet does not commit to a single parse. Large-scale experiments (Section 4) show an improvement of 1.7 BLEU points over the 1-best baseline, which is also 0.8 points higher than decoding with 30-best trees, and takes even less time thanks to the sharing of common subtrees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Current tree-based systems perform translation in two separate steps: parsing and decoding. A parser first parses the source language input into a 1-best tree T , and the decoder then searches for the best derivation (a sequence of translation steps) d * that converts source tree T into a target-language string among all possible derivations D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d * = arg max d\u2208D P(d|T ).",
"eq_num": "(1)"
}
],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "We will now proceed with a running example translating from Chinese to English:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "(2) \u00a3\u00fc B\u00f9sh\u00ed Bush AE y\u01d4 with/and \u00e9\u00e9 Sh\u0101l\u00f3ng Sharon 1 \u00c4 j\u01d4x\u00edng hold le pass. \u1e27 u\u00ect\u00e1n talk 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "\"Bush held a talk 2 with Sharon 1 \" Figure 2 shows how this process works. The Chinese sentence (a) is first parsed into tree (b), which will be converted into an English string in 5 steps. First, at the root node, we apply rule r 1 preserving top-level word-order between English and Chinese, (Liu et al., 2007) was a misnomer which actually refers to a set of several unrelated subtrees over disjoint spans, and should not be confused with the standard concept of packed forest. which results in two unfinished subtrees in (c). Then rule r 2 grabs the B\u00f9sh\u00ed subtree and transliterate it (r 2 ) NPB(NR(B\u00f9sh\u00ed)) \u2192 Bush.",
"cite_spans": [
{
"start": 294,
"end": 312,
"text": "(Liu et al., 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "(r 1 ) IP(x 1 :NPB x 2 :VP) \u2192 x 1 x 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "Similarly, rule r 3 shown in Figure 1 is applied to the VP subtree, which swaps the two NPBs, yielding the situation in (d). This rule is particularly interesting since it has multiple levels on the source side, which has more expressive power than synchronous context-free grammars where rules are flat.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "More formally, a (tree-to-string) translation rule ) is a tuple t, s, \u03c6 , where t is the source-side tree, whose internal nodes are labeled by nonterminal symbols in N , and whose frontier nodes are labeled by source-side terminals in \u03a3 or variables from a set X = {x 1 , x 2 , . . .}; s \u2208 (X \u222a \u2206) * is the target-side string where \u2206 is the target language terminal set; and \u03c6 is a mapping from X to nonterminals in N . Each variable x i \u2208 X occurs exactly once in t and exactly once in s. We denote R to be the translation rule set. A similar formalism appears in another form in (Liu et al., 2006) . These rules are in the reverse direction of the original string-to-tree transducer rules defined by Galley et al. (2004) .",
"cite_spans": [
{
"start": 581,
"end": 599,
"text": "(Liu et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 702,
"end": 722,
"text": "Galley et al. (2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "Finally, from step (d) we apply rules r 4 and r 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "(r 4 ) NPB(NN(hu\u00ect\u00e1n)) \u2192 a talk (r 5 ) NPB(NR(Sh\u0101l\u00f3ng)) \u2192 Sharon",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "which perform phrasal translations for the two remaining subtrees, respectively, and get the Chinese translation in (e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-based systems",
"sec_num": "2"
},
{
"text": "We now extend the tree-based idea from the previous section to the case of forest-based translation. Again, there are two steps, parsing and decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest-based translation",
"sec_num": "3"
},
{
"text": "In the former, a (modified) parser will parse the input sentence and output a packed forest (Section 3.1) rather than just the 1-best tree. Such a forest is usually huge in size, so we use the forest pruning algorithm (Section 3.4) to reduce it to a reasonable size. The pruned parse forest will then be used to direct the translation. In the decoding step, we first convert the parse forest into a translation forest using the translation rule set, by similar techniques of pattern-matching from tree-based decoding (Section 3.2). Then the decoder searches for the best derivation on the translation forest and outputs the target string (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest-based translation",
"sec_num": "3"
},
{
"text": "Informally, a packed parse forest, or forest in short, is a compact representation of all the derivations (i.e., parse trees) for a given sentence under a context-free grammar (Billot and Lang, 1989) . For example, consider the Chinese sentence in Example (2) above, which has (at least) two readings depending on the part-of-speech of the word y\u01d4, which can be either a preposition (P \"with\") or a conjunction (CC \"and\"). The parse tree for the preposition case is shown in Figure 2 (b) as the 1-best parse, while for the conjunction case, the two proper nouns (B\u00f9sh\u00ed and Sh\u0101l\u00f3ng) are combined to form a coordinated NP",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "(Billot and Lang, 1989)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 475,
"end": 483,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "NPB 0,1 CC 1,2 NPB 2,3 NP 0,3",
"eq_num": "(*)"
}
],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "which functions as the subject of the sentence. In this case the Chinese sentence is translated into",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "(3) \" [Bush and Sharon] held a talk\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "Shown in Figure 3 (a), these two parse trees can be represented as a single forest by sharing common subtrees such as NPB 0,1 and VPB 3,6 . Such a forest has a structure of a hypergraph (Klein and Manning, 2001; Huang and Chiang, 2005) , where items like NP 0,3 are called nodes, and deductive steps like (*) correspond to hyperedges.",
"cite_spans": [
{
"start": 186,
"end": 211,
"text": "(Klein and Manning, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 212,
"end": 235,
"text": "Huang and Chiang, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "More formally, a forest is a pair V, E , where V is the set of nodes, and E the set of hyperedges. For a given sentence w 1:l = w 1 . . . w l , each node v \u2208 V is in the form of X i,j , which denotes the recognition of nonterminal X spanning the substring from positions i through j (that is, w i+1 . . . w j ). Each hyperedge e \u2208 E is a pair tails(e), head (e) , where head (e) \u2208 V is the consequent node in the deductive step, and tails(e) \u2208 V * is the list of antecedent nodes. For example, the hyperedge for deduction (*) is notated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "(NPB 0,1 , CC 1,2 , NPB 2,3 ), NP 0,3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "There is also a distinguished root node TOP in each forest, denoting the goal item in parsing, which is simply S 0,l where S is the start symbol and l is the sentence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Forest",
"sec_num": "3.1"
},
{
"text": "Given a parse forest and a translation rule set R, we can generate a translation forest which has a similar hypergraph structure. Basically, just as the depthfirst traversal procedure in tree-based decoding (Figure 2) , we visit in top-down order each node v in the (a) ",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 217,
"text": "(Figure 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "IP 0,6 NP 0,3 NPB 0,1 NR 0,1 B\u00f9sh\u00ed CC 1,2 y\u01d4 VP 1,6 PP 1,3 P 1,2 NPB 2,3 NR 2,3 Sh\u0101l\u00f3ng VPB 3,6 VV 3,4 j\u01d4x\u00edng AS 4,5 le NPB 5,6 NN 5,6 hu\u00ect\u00e1n \u21d3 translation rule set R (b) IP 0,6 NP 0,3 NPB 0,1 CC 1,2 VP 1,6 PP 1,3 P 1,2 NPB 2,3 VPB 3,6",
"eq_num": "VV 3"
}
],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": ":NPB x 2 :VP) \u2192 x 1 x 2 e 2 r 6 IP(x 1 :NP x 2 :VPB) \u2192 x 1 x 2 e 3 r 3 VP(PP(P(y\u01d4) x 1 :NPB) VPB(VV(j\u01d4x\u00edng) AS(le) x 2 :NPB)) \u2192 held x 2 with x 1 e 4 r 7 VP(PP(P(y\u01d4) x 1 :NPB) x 2 :VPB) \u2192 x 2 with x 1 e 5 r 8 NP(x 1 :NPB CC(y\u01d4) x 2 :NPB) \u2192 x 1 and x 2 e 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": "r 9 VPB(VV(j\u01d4x\u00edng) AS(le) x 1 :NPB) \u2192 held x 1 Figure 3 : (a) the parse forest of the example sentence; solid hyperedges denote the 1-best parse in Figure 2 (b) while dashed hyperedges denote the alternative parse due to Deduction (*). (b) the corresponding translation forest after applying the translation rules (lexical rules not shown); the derivation shown in bold solid lines (e 1 and e 3 ) corresponds to the derivation in Figure 2 ; the one shown in dashed lines (e 2 , e 5 , and e 6 ) uses the alternative parse and corresponds to the translation in Example (3). (c) the correspondence between translation hyperedges and translation rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 3",
"ref_id": null
},
{
"start": 148,
"end": 156,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 430,
"end": 438,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": "parse forest, and try to pattern-match each translation rule r against the local sub-forest under node v. For example, in Figure 3 (a), at node VP 1,6 , two rules r 3 and r 7 both matches the local subforest, and will thus generate two translation hyperedges e 3 and e 4 (see Figure 3(b-c) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 130,
"text": "Figure 3",
"ref_id": null
},
{
"start": 276,
"end": 289,
"text": "Figure 3(b-c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": "More formally, we define a function match(r, v) which attempts to pattern-match rule r at node v in the parse forest, and in case of success, returns a list of descendent nodes of v that are matched to the variables in r, or returns an empty list if the match fails. Note that this procedure is recursive and may which covers three parse hyperedges, while nodes in gray do not pattern-match any rule (although they are involved in the matching of other nodes, where they match interior nodes of the source-side tree fragments in a rule). We can thus construct a translation hyperedge from match(r, v) to v for each node v and rule r. In addition, we also need to keep track of the target string s(r) specified by rule r, which includes target-language terminals and variables. For example, s(r 3 ) = \"held x 2 with x 1 \". The subtranslations of the matched variable nodes will be substituted for the variables in s(r) to get a complete translation for node v. So a translation hyperedge e is a triple tails(e), head (e), s where s is the target string from the rule, for example, e 3 = (NPB 2,3 , NPB 5,6 ), VP 1,6 , \"held x 2 with x 1 \" .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": "This procedure is summarized in Pseudocode 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Forest",
"sec_num": "3.2"
},
{
"text": "The decoder performs two tasks on the translation forest: 1-best search with integrated language model (LM), and k-best search with LM to be used in minimum error rate training. Both tasks can be done efficiently by forest-based algorithms based on k-best parsing (Huang and Chiang, 2005) . For 1-best search, we use the cube pruning technique (Chiang, 2007; Huang and Chiang, 2007) which approximately intersects the translation forest with the LM. Basically, cube pruning works bottom up in a forest, keeping at most k +LM items at each node, and uses the best-first expansion idea from the Algorithm 2 of Huang and Chiang (2005) to speed up the computation. An +LM item of node v has the form (v a\u22c6b ), where a and b are the target-language boundary words. For example, (VP held \u22c6 Sharon",
"cite_spans": [
{
"start": 264,
"end": 288,
"text": "(Huang and Chiang, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 344,
"end": 358,
"text": "(Chiang, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 359,
"end": 382,
"text": "Huang and Chiang, 2007)",
"ref_id": "BIBREF9"
},
{
"start": 608,
"end": 631,
"text": "Huang and Chiang (2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding Algorithms",
"sec_num": "3.3"
},
{
"text": ") is an +LM item with its translation starting with \"held\" and ending with \"Sharon\". This scheme can be easily extended to work with a general n-gram by storing n \u2212 1 words at both ends (Chiang, 2007) .",
"cite_spans": [
{
"start": 186,
"end": 200,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "1,6",
"sec_num": null
},
{
"text": "For k-best search after getting 1-best derivation, we use the lazy Algorithm 3 of Huang and Chiang (2005) that works backwards from the root node, incrementally computing the second, third, through the kth best alternatives. However, this time we work on a finer-grained forest, called translation+LM forest, resulting from the intersection of the translation forest and the LM, with its nodes being the +LM items during cube pruning. Although this new forest is prohibitively large, Algorithm 3 is very efficient with minimal overhead on top of 1-best.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "Huang and Chiang (2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "1,6",
"sec_num": null
},
{
"text": "We use the pruning algorithm of (Jonathan Graehl, p.c.; Huang, 2008) that is very similar to the method based on marginal probability (Charniak and Johnson, 2005) , except that it prunes hyperedges as well as nodes. Basically, we use an Inside-Outside algorithm to compute the Viterbi inside cost \u03b2(v) and the Viterbi outside cost \u03b1(v) for each node v, and then compute the merit \u03b1\u03b2(e) for each hyperedge:",
"cite_spans": [
{
"start": 56,
"end": 68,
"text": "Huang, 2008)",
"ref_id": "BIBREF11"
},
{
"start": 134,
"end": 162,
"text": "(Charniak and Johnson, 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Pruning Algorithm",
"sec_num": "3.4"
},
{
"text": "\u03b1\u03b2(e) = \u03b1(head (e)) +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Pruning Algorithm",
"sec_num": "3.4"
},
{
"text": "u i \u2208tails(e) \u03b2(u i ) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Pruning Algorithm",
"sec_num": "3.4"
},
{
"text": "Intuitively, this merit is the cost of the best derivation that traverses e, and the difference \u03b4(e) = \u03b1\u03b2(e) \u2212 \u03b2(TOP) can be seen as the distance away from the globally best derivation. We prune away a hyperedge e if \u03b4(e) > p for a threshold p. Nodes with all incoming hyperedges pruned are also pruned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Pruning Algorithm",
"sec_num": "3.4"
},
{
"text": "We can extend the simple model in Equation 1 to a log-linear one (Liu et al., 2006; :",
"cite_spans": [
{
"start": 65,
"end": 83,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "d * = arg max d\u2208D P(d | T ) \u03bb 0 \u2022 e \u03bb 1 |d| \u2022 P lm (s) \u03bb 2 \u2022 e \u03bb 3 |s| (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "where T is the 1-best parse, e \u03bb 1 |d| is the penalty term on the number of rules in a derivation, P lm (s) is the language model and e \u03bb 3 |s| is the length penalty term on target translation. The derivation probability conditioned on 1-best tree, P(d | T ), should now be replaced by P(d | H p ) where H p is the parse forest, which decomposes into the product of probabilities of translation rules r \u2208 d:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(d | H p ) = r\u2208d P(r)",
"eq_num": "(6)"
}
],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "where each P(r) is the product of five probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(r) = P(t | s) \u03bb 4 \u2022 P lex (t | s) \u03bb 5 \u2022 P(s | t) \u03bb 6 \u2022 P lex (s | t) \u03bb 7 \u2022 P(t | H p ) \u03bb 8 .",
"eq_num": "(7)"
}
],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Here t and s are the source-side tree and targetside string of rule r, respectively, P(t | s) and P(s | t) are the two translation probabilities, and P lex (\u2022) are the lexical probabilities. The only extra term in forest-based decoding is P(t | H p ) denoting the source side parsing probability of the current translation rule r in the parse forest, which is the product of probabilities of each parse hyperedge e p covered in the pattern-match of t against H p (which can be recorded at conversion time):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "P(t | H p ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "ep \u2208Hp, e p covered by t P(e p ). (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Our experiments are on Chinese-to-English translation, and we use the Chinese parser of Xiong et al. (2005) to parse the source side of the bitext. Following Huang (2008) , we modify the parser to output a packed forest for each sentence. Our training corpus consists of 31,011 sentence pairs with 0.8M Chinese words and 0.9M English words. We first word-align them by GIZA++ refined by \"diagand\" from Koehn et al. (2003) , and apply the tree-to-string rule extraction algorithm (Galley et al., 2006; Liu et al., 2006) , which resulted in 346K translation rules. Note that our rule extraction is still done on 1-best parses, while decoding is on k-best parses or packed forests. We also use the SRI Language Modeling Toolkit (Stolcke, 2002) to train a trigram language model with Kneser-Ney smoothing on the English side of the bitext.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "Xiong et al. (2005)",
"ref_id": "BIBREF24"
},
{
"start": 158,
"end": 170,
"text": "Huang (2008)",
"ref_id": "BIBREF11"
},
{
"start": 402,
"end": 421,
"text": "Koehn et al. (2003)",
"ref_id": "BIBREF13"
},
{
"start": 479,
"end": 500,
"text": "(Galley et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 501,
"end": 518,
"text": "Liu et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 725,
"end": 740,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "4.1"
},
{
"text": "We NIST MT Evaluation test set as our test set (1082 sentences), with on average 28.28 and 26.31 words per sentence, respectively. We evaluate the translation quality using the case-sensitive BLEU-4 metric (Papineni et al., 2002) . We use the standard minimum error-rate training (Och, 2003) to tune the feature weights to maximize the system's BLEU score on the dev set. On dev and test sets, we prune the Chinese parse forests by the forest pruning algorithm in Section 3.4 with a threshold of p = 12, and then convert them into translation forests using the algorithm in Section 3.2. To increase the coverage of the rule set, we also introduce a default translation hyperedge for each parse hyperedge by monotonically translating each tail node, so that we can always at least get a complete translation in the end.",
"cite_spans": [
{
"start": 206,
"end": 229,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF19"
},
{
"start": 280,
"end": 291,
"text": "(Och, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data preparation",
"sec_num": "4.1"
},
{
"text": "The BLEU score of the baseline 1-best decoding is 0.2325, which is consistent with the result of 0.2302 in (Liu et al., 2007) on the same training, development and test sets, and with the same rule extraction procedure. The corresponding BLEU score of Pharaoh (Koehn, 2004) is 0.2182 on this dataset. Figure 4 compares forest decoding with decoding on k-best trees in terms of speed and quality. Using more than one parse tree apparently improves the BLEU score, but at the cost of much slower decoding, since each of the top-k trees has to be decoded individually although they share many common subtrees. Forest decoding, by contrast, is much faster and produces consistently better BLEU scores. With pruning threshold p = 12, it achieved a BLEU score of 0.2485, which is an absolute improvement of 1.6% points over the 1-best baseline, and is statistically significant using the sign-test of Collins et al. (2005) ",
"cite_spans": [
{
"start": 107,
"end": 125,
"text": "(Liu et al., 2007)",
"ref_id": "BIBREF17"
},
{
"start": 260,
"end": 273,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 895,
"end": 916,
"text": "Collins et al. (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 301,
"end": 309,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "(p < 0.01).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We also investigate the question of how often the ith-best parse tree is picked to direct the translation (i = 1, 2, . . .), in both k-best and forest decoding schemes. A packed forest can be roughly viewed as a (virtual) \u221e-best list, and we can thus ask how often is a parse beyond top-k used by a forest, which relates to the fundamental limitation of k-best lists. Figure 5 shows that, the 1-best parse is still preferred 25% of the time among 30-best trees, and 23% of the time by the forest decoder. These ratios decrease dramatically as i increases, but the forest curve has a much longer tail in large i. Indeed, 40% of the trees preferred by a forest is beyond top-30, 32% is beyond top-100, and even 20% beyond top-1000. This confirms the fact that we need exponentially large kbest lists with the explosion of alternatives, whereas a forest can encode these information compactly.",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We also conduct experiments on a larger dataset, which contains 2.2M training sentence pairs. Besides the trigram language model trained on the English side of these bitext, we also use another trigram model trained on the first 1/3 of the Xinhua portion of Gigaword corpus. The two LMs have dis-approach \\ ruleset TR TR+BP 1-best tree 0.2666 0.2939 30-best trees 0.2755 0.3084 forest (p = 12) 0.2839 0.3149 tinct weights tuned by minimum error rate training. The dev and test sets remain the same as above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling to large data",
"sec_num": "4.3"
},
{
"text": "Furthermore, we also make use of bilingual phrases to improve the coverage of the ruleset. Following Liu et al. (2006) , we prepare a phrase-table from a phrase-extractor, e.g. Pharaoh, and at decoding time, for each node, we construct on-the-fly flat translation rules from phrases that match the sourceside span of the node. These phrases are called syntactic phrases which are consistent with syntactic constituents (Chiang, 2005) , and have been shown to be helpful in tree-based systems (Galley et al., 2006; Liu et al., 2006) .",
"cite_spans": [
{
"start": 101,
"end": 118,
"text": "Liu et al. (2006)",
"ref_id": "BIBREF16"
},
{
"start": 419,
"end": 433,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 492,
"end": 513,
"text": "(Galley et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 514,
"end": 531,
"text": "Liu et al., 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling to large data",
"sec_num": "4.3"
},
{
"text": "The final results are shown in Table 1 , where TR denotes translation rule only, and TR+BP denotes the inclusion of bilingual phrases. The BLEU score of forest decoder with TR is 0.2839, which is a 1.7% points improvement over the 1-best baseline, and this difference is statistically significant (p < 0.01). Using bilingual phrases further improves the BLEU score by 3.1% points, which is 2.1% points higher than the respective 1-best baseline. We suspect this larger improvement is due to the alternative constituents in the forest, which activates many syntactic phrases suppressed by the 1-best parse.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Scaling to large data",
"sec_num": "4.3"
},
{
"text": "We have presented a novel forest-based translation approach which uses a packed forest rather than the 1-best parse tree (or k-best parse trees) to direct the translation. Forest provides a compact data-structure for efficient handling of exponentially many tree structures, and is shown to be a promising direction with state-of-the-art translation results and reasonable decoding speed. This work can thus be viewed as a compromise between string-based and tree-based paradigms, with a good trade-off between speed and accuarcy. For future work, we would like to use packed forests not only in decoding, but also for translation rule extraction during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and future work",
"sec_num": "5"
},
{
"text": "There has been some confusion in the MT literature regarding the term forest: the word \"forest\" in \"forest-to-string rules\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Part of this work was done while L. H. was visiting CAS/ICT. The authors were supported by National Natural Science Foundation of China, Contracts 60736014 and 60573188, and 863 State Key Project No. 2006AA010108 (H. M and Q. L.), and by NSF ITR EIA-0205456 (L. H.). We would also like to thank Chris Quirk for inspirations, Yang Liu for help with rule extraction, Mark Johnson for posing the question of virtual \u221e-best list, and the anonymous reviewers for suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The structure of shared forests in ambiguous parsing",
"authors": [
{
"first": "Sylvie",
"middle": [],
"last": "Billot",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Lang",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of ACL '89",
"volume": "",
"issue": "",
"pages": "143--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvie Billot and Bernard Lang. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of ACL '89, pages 143-151.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Coarse-tofine-grained n-best parsing and discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine-grained n-best parsing and discriminative rerank- ing. In Proceedings of the 43rd ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL, pages 263-270, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Comput. Linguist",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Comput. Linguist., 33(2):201-228.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Clause restructuring for statistical machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ivona",
"middle": [],
"last": "Kucerova",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531-540, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Machine translation using probabilistic synchronous dependency insertion grammars",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "541--548",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Ding and Martha Palmer. 2005. Machine trans- lation using probabilistic synchronous dependency in- sertion grammars. In Proceedings of ACL, pages 541- 548, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "273--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In HLT- NAACL, pages 273-280, Boston, MA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Scalable inference and training of context-rich syntactic translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceed- ings of COLING-ACL, pages 961-968, Sydney, Aus- tralia, July.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Better k-best parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Ninth International Workshop on Parsing Technologies (IWPT-2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of Ninth International Work- shop on Parsing Technologies (IWPT-2005), Vancou- ver, Canada.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of ACL, pages 144-151, Prague, Czech Republic, June.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical syntax-directed translation with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of AMTA, Boston, MA, August.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Forest reranking: Discriminative parsing with non-local features",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL, Columbus, OH.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parsing and Hypergraphs",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D."
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT-2001)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001. Parsing and Hypergraphs. In Proceedings of the Seventh In- ternational Workshop on Parsing Technologies (IWPT- 2001), 17-19 October 2001, Beijing, China.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Joseph"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of HLT-NAACL, Edmonton, AB, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation mod- els. In Proceedings of AMTA, pages 115-124.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A path-based transfer model for machine translation",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 2004. A path-based transfer model for ma- chine translation. In Proceedings of the 20th COLING, Barcelona, Spain.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tree-tostring alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "609--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to- string alignment template for statistical machine trans- lation. In Proceedings of COLING-ACL, pages 609- 616, Sydney, Australia, July.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Forest-to-string statistical translation rules",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "704--711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Yun Huang, Qun Liu, and Shouxun Lin. 2007. Forest-to-string statistical translation rules. In Pro- ceedings of ACL, pages 704-711, Prague, Czech Re- public, June.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2003. Minimum error rate training in sta- tistical machine translation. In Proceedings of ACL, pages 160-167.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of ACL, pages 311-318, Philadephia, USA, July.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The impact of parse quality on syntactically-informed statistical machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk and Simon Corston-Oliver. 2006. The im- pact of parse quality on syntactically-informed statis- tical machine translation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dependency treelet translation: Syntactically informed phrasal SMT",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "271--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of ACL, pages 271-279, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ICSLP",
"volume": "30",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an extensible lan- guage modeling toolkit. In Proceedings of ICSLP, vol- ume 30, pages 901-904.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Parsing the Penn Chinese Treebank with semantic knowledge",
"authors": [
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Shuanglong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of IJCNLP 2005",
"volume": "",
"issue": "",
"pages": "70--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deyi Xiong, Shuanglong Li, Qun Liu, and Shouxun Lin. 2005. Parsing the Penn Chinese Treebank with seman- tic knowledge. In Proceedings of IJCNLP 2005, pages 70-81, Jeju Island, South Korea.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Synchronous binarization for machine translation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for ma- chine translation. In Proceedings of HLT-NAACL, New York, NY.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example translation rule (r 3 inFig. 2).",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "(a) B\u00f9sh\u00ed [y\u01d4 Sh\u0101l\u00f3ng ] 1 [j\u01d4x\u00edng le hu\u00ect\u00e1n ] 2 Bush [held a talk] 2 [with Sharon] 1",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "An example derivation of tree-to-string translation. Shaded regions denote parts of the tree that is pattern-matched with the rule being applied.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "use the 2002 NIST MT Evaluation test set as our development set (878 sentences) and the 2005",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Comparison of decoding on forests with decoding on k-best trees.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF6": {
"text": "Percentage of the i-th best parse tree being picked in decoding. 32% of the distribution for forest decoding is beyond top-100 and is not shown on this plot.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>4:</td><td colspan=\"2\">for each translation rule r \u2208 R do</td></tr><tr><td>5:</td><td>vars \u2190 match(r, v)</td><td>\u22b2 variables</td></tr><tr><td>6:</td><td>if vars is not empty then</td><td/></tr><tr><td>7:</td><td>e \u2190 vars, v, s(r)</td><td/></tr><tr><td>8:</td><td colspan=\"2\">add translation hyperedge e to H t</td></tr><tr><td colspan=\"3\">involve multiple parse hyperedges. For example,</td></tr><tr><td/><td colspan=\"2\">match(r 3 , VP 1,6 ) = (NPB 2,3 , NPB 5,6 ),</td></tr></table>",
"text": "Pseudocode The conversion algorithm. Input: parse forest H p and rule set R 2: Output: translation forest H t 3: for each node v \u2208 V p in top-down order do",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "BLEU score results from training on large data.",
"num": null,
"html": null
}
}
}
}