{
"paper_id": "D14-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:54:53.495404Z"
},
"title": "Reordering Model for Forest-to-String Machine Translation",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a novel extension of a forest-to-string machine translation system with a reordering model. We predict reordering probabilities for every pair of source words with a model using features observed from the input parse forest. Our approach naturally deals with the ambiguity present in the input parse forest, but, at the same time, takes into account only the parts of the input forest used by the current translation hypothesis. The method provides improvement from 0.6 up to 1.0 point measured by (Ter \u2212 Bleu)/2 metric.",
"pdf_parse": {
"paper_id": "D14-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a novel extension of a forest-to-string machine translation system with a reordering model. We predict reordering probabilities for every pair of source words with a model using features observed from the input parse forest. Our approach naturally deals with the ambiguity present in the input parse forest, but, at the same time, takes into account only the parts of the input forest used by the current translation hypothesis. The method provides improvement from 0.6 up to 1.0 point measured by (Ter \u2212 Bleu)/2 metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Various commonly adopted statistical machine translation (SMT) approaches differ in the amount of linguistic knowledge present in the rules they employ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Phrase-based (Koehn et al., 2003) models are strong in lexical coverage in local contexts, and use external models to score reordering options (Tillman, 2004; Koehn et al., 2005) .",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF10"
},
{
"start": 143,
"end": 158,
"text": "(Tillman, 2004;",
"ref_id": "BIBREF19"
},
{
"start": 159,
"end": 178,
"text": "Koehn et al., 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hierarchical models (Chiang, 2005) use lexicalized synchronous context-free grammar rules to produce local reorderings. The grammaticality of their output can be improved by additional reordering models scoring permutations of the source words. Reordering model can be either used for source pre-ordering (Tromble and Eisner, ) , integrated into decoding via translation rules extension (Hayashi et al., 2010) , additional lexical features (He et al., ) , or using external sources of information, such as source syntactic features observed from a parse tree (Huang et al., 2013) .",
"cite_spans": [
{
"start": 20,
"end": 34,
"text": "(Chiang, 2005)",
"ref_id": "BIBREF0"
},
{
"start": 305,
"end": 327,
"text": "(Tromble and Eisner, )",
"ref_id": null
},
{
"start": 387,
"end": 409,
"text": "(Hayashi et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 440,
"end": 453,
"text": "(He et al., )",
"ref_id": null
},
{
"start": 559,
"end": 579,
"text": "(Huang et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tree-to-string (T2S) models (Liu et al., 2006; Galley et al., 2006) use rules with syntactic structures, aiming at even more grammatically appropriate reorderings.",
"cite_spans": [
{
"start": 28,
"end": 46,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 47,
"end": 67,
"text": "Galley et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Forest-to-string (F2S) systems use source syntactic forest as the input to overcome parsing errors, and to alleviate sparseness of translation rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The parse forest may often represent several meanings for an ambiguous input that may need to be transtated differently using different word orderings. The following example of an ambiguous Chinese sentence with ambiguous part-of-speech labeling motivates our interest in the reordering model for the F2S translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "S. t\u01ceol\u00f9n (0) SSS. h\u00f9i (1) SSS z\u011bnmey\u00e0ng (2) discussion/NN SS meeting/NN how/VV discuss/VV SSSSSwill/VV There are several possible meanings based on the different POS tagging sequences. We present translations for two of them, together with the indices to their original source words: (a) NN NN VV:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "How 2 was 2 the 0 discussion 0 meeting 1 ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(b) VV VV VV: Discuss 0 what 2 will 1 happen 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A T2S system starts from a single parse corresponding to one of the possible POS sequences, the same tree can be used to predict word reorderings. On the other hand, a F2S system deals with the ambiguity through exploring translation hypotheses for all competing parses representing the different meanings. As our example suggests, different meanings also tend to reorder differently id rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "r 1 NP(t\u01ceol\u00f9n/NN) \u2192 discussion r 2 NP(h\u00f9i/NN) \u2192 meeting r 3 NP(x 1 :NP x 2 :NP) \u2192 the x 1 x 2 r 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "IP(x 1 :NP z\u011bnmey\u00e0ng/VV) \u2192 how was x 1 r 5 IP(h\u00f9i/VV z\u011bnmey\u00e0ng/VV) \u2192 what will happen r 6 IP(t\u01ceol\u00f9n/VV x 1 :IP) \u2192 discuss x 1 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Forest-to-string translation is an extension of the tree-to-string model (Liu et al., 2006; Huang et al., 2006) allowing it to use a packed parse forest as the input instead of a single parse tree. Figure 1 shows a tree-to-string translation rule (Huang et al., 2006) , which is a tuple lhs(r), rhs(r), \u03c8(r) , where lhs(r) is the sourceside tree fragment, whose internal nodes are labeled by nonterminal symbols (like NP), and whose frontier nodes are labeled by sourcelanguage words (like \"z\u011bnmey\u00e0ng\") or variables from a finite set X = {x 1 , x 2 , . . .}; rhs(r) is the target-side string expressed in target-language words (like \"how was\") and variables; and \u03c8(r) is a mapping from X to nonterminals. Each variable x i \u2208 X occurs exactly once in lhs(r) and exactly once in rhs(r).",
"cite_spans": [
{
"start": 73,
"end": 91,
"text": "(Liu et al., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 92,
"end": 111,
"text": "Huang et al., 2006)",
"ref_id": "BIBREF7"
},
{
"start": 247,
"end": 267,
"text": "(Huang et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 198,
"end": 206,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Translation Models",
"sec_num": "2"
},
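{
"text": "The rule tuple just defined can be sketched as a small data type. This is a hypothetical Python illustration (the class and function names are ours, not the paper's implementation); it encodes rule r 4 from Table 1 and checks the each-variable-occurs-once property:

```python
from dataclasses import dataclass, field

# A source-side tree fragment: internal nodes carry nonterminal labels
# (e.g. 'NP'); frontier nodes are source words or variables x1, x2, ...
@dataclass
class TreeNode:
    label: str
    children: list = field(default_factory=list)

@dataclass
class T2SRule:
    lhs: TreeNode   # source-side tree fragment lhs(r)
    rhs: list       # target words and variables, e.g. ['how', 'was', 'x1']
    psi: dict       # mapping psi(r) from variables to nonterminals

# Rule r4 from Table 1: IP(x1:NP zenmeyang/VV) -> 'how was' x1
r4 = T2SRule(
    lhs=TreeNode('IP', [TreeNode('x1'),
                        TreeNode('VV', [TreeNode('zenmeyang')])]),
    rhs=['how', 'was', 'x1'],
    psi={'x1': 'NP'},
)

def variables(rule):
    '''Collect the variables on each side of the rule.'''
    def frontier(node):
        if not node.children:
            yield node.label
        for child in node.children:
            yield from frontier(child)
    lhs_vars = [l for l in frontier(rule.lhs) if l.startswith('x')]
    rhs_vars = [t for t in rule.rhs if t.startswith('x')]
    return lhs_vars, rhs_vars

# Each variable occurs exactly once in lhs(r) and exactly once in rhs(r).
lhs_vars, rhs_vars = variables(r4)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Models",
"sec_num": "2"
},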
{
"text": "The Table 1 lists all rules necessary to derive translations (a) and (b), with their internal structure removed for simplicity.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Translation Models",
"sec_num": "2"
},
{
"text": "Typically, an F2S system translates in two steps (shown in Figure 2 ): parsing and decoding. In the parsing step, the source language input is converted into a parse forest (A). In the decoding step, we first convert the parse forest into a translation forest F t in (B) by using the fast pattern-matching technique (Zhang et al., 2009) . Then the decoder uses dynamic programing with beam search and cube pruning to find the approximation to the best scoring derivation in the translation forest, and outputs the target string.",
"cite_spans": [
{
"start": 316,
"end": 336,
"text": "(Zhang et al., 2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Translation Models",
"sec_num": "2"
},
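{
"text": "Step (B), converting the parse forest into a translation forest, can be sketched as a toy pattern matcher. This is an illustrative Python fragment under simplifying assumptions (rules match only one level deep, unlike the real fast pattern matching of Zhang et al. (2009)); the data mirror the example forest and rules r 3 and r 4 :

```python
# A forest node is (label, start, end); each incoming hyperedge lists
# the node's children under one parse.
parse_nodes = {('NP', 0, 1), ('NP', 1, 2), ('NP', 0, 2),
               ('VV', 2, 3), ('IP', 0, 3)}
parse_hyperedges = {
    ('NP', 0, 2): [[('NP', 0, 1), ('NP', 1, 2)]],
    ('IP', 0, 3): [[('NP', 0, 2), ('VV', 2, 3)]],
}

# Rules keyed by (root label, frontier label sequence) -> rule id.
rules = {
    ('NP', ('NP', 'NP')): 'r3',
    ('IP', ('NP', 'VV')): 'r4',
}

def match_rules(node):
    '''Yield (rule id, children) for every rule matching a hyperedge of node.'''
    for children in parse_hyperedges.get(node, []):
        key = (node[0], tuple(child[0] for child in children))
        if key in rules:
            yield rules[key], children

# The translation forest maps each node to its translation hyperedges.
translation_forest = {node: list(match_rules(node)) for node in parse_nodes}
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Models",
"sec_num": "2"
},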
{
"text": "In this section, we describe the process of applying the reordering model scores. We score pairwise translation reorderings for every pair of source words similarly as described by Huang et al. (2013) . In their approach, an external model of ordering distributions of sibling constituent pairs predicts the reordering of word pairs. Our approach deals with parse forests rather than with single trees, thus we have to model the scores differently. We model ordering distributions for every pair of close relatives-nodes in the parse forest that may occur together as frontier nodes of a single matching rule. We further condition the distribution on a third node-a common ancestor of the node pair that corresponds to the root node of the matching rule. This way our external model takes into acount the syntactic context of the hypothesis. For example, nodes NP 0, 1 and NP 1, 2 are close relatives, NP 0, 2 and IP 0, 3 are their common ancestors; NP 0, 1 and VV 2, 3 are close relatives, IP 0, 3 is their common ancestor; NP 0, 1 and VV 1, 2 are not close relatives. More formally, let us have an input sentence (w 0 , ..., w n ) and its translation hypothesis h. For every i and j such that 0 \u2264 i < j \u2264 n we assume that the translations of w i and w j are in the hypothesis h either in the same or inverted ordering o i j \u2208 {Inorder, Reorder}, with a probability P order (o i j |h). Conditioning on h signifies that the probabilistic model takes the current hypothesis as a parameter. The reordering score of the entire hy- The parse forest of the example sentence. Solid hyperedges denote the best parse, dashed hyperedges denote the second best parse. Unary edges were collapsed. (B) The corresponding translation forest F t after applying the tree-to-string translation rule set R t . Each translation hyperedge (e.g. e 4 ) has the same index as the corresponding rule (r 4 ). 
The forest-tostring system can produce the example translation (a) (solid derivation: r 1 , r 2 , r 3 , and r 4 ) and (b) (dashed derivation: r 5 , r 6 ).",
"cite_spans": [
{
"start": 181,
"end": 200,
"text": "Huang et al. (2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(A) IP 0, 3 NP 0, 2 NP 0, 1 t\u01ceol\u00f9n VV 0, 1 NP 1, 2 h\u00f9i VV 1, 2 IP 1, 3 z\u011bnmey\u00e0ng VV 2, 3 R t \u21d2 (B)",
"eq_num": "e"
}
],
"section": "Forest Reordering Model",
"sec_num": "3"
},
{
"text": "pothesis f order (h) is then computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f order = 0\u2264i< j\u2264n \u2212 log P order (o i j = o h i j | h),",
"eq_num": "(1)"
}
],
"section": "Forest Reordering Model",
"sec_num": "3"
},
{
"text": "where o h i j denotes the actual ordering used in h. The score f order can be computed recursively by dynamic programing during the decoding. As an example, we show in Table 2 reordering probabilities retrieved in decoding of our sample sentence. (a) If h is a hypothesis formed by a single translation rule r with no frontier nonterminals, we evaluate all word pairs w i and w j covered by h such that i < j. For each such pair we find the frontier nodes x and y matched by r such that x spans exactly w i and y spans exactly w j . (In this case, x and y match preterminal nodes, each spanning one position). We also find the node z matching the root of r. Then we directly use the Equation 1 to compute the score using an external model P order (o i j |xyz) to estimate the probability of reordering the relative nodes. For example, when applying rule r 5 , we use the ordering distribution P order (o 1,2 |VV 1, 2 , VV 2, 3 , IP 1, 3 ) to score reorderings of h\u00f9i and z\u011bnmey\u00e0ng.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
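{
"text": "The per-pair contribution to Equation 1 can be illustrated with a toy computation. The probability value below is invented for the example; only the structure, summing -log P order over scored word pairs, follows the text (the data mirror the application of rule r 5 ):

```python
import math

# Toy external model P_order(o_ij | x, y, z): distribution over keeping or
# inverting source order, conditioned on the close relatives x, y and their
# common ancestor z. The 0.2/0.8 values are invented for illustration.
P_order = {
    (('VV', 1, 2), ('VV', 2, 3), ('IP', 1, 3)): {'Inorder': 0.2, 'Reorder': 0.8},
}

def pair_score(x, y, z, observed_order):
    '''One term of Equation 1: -log P_order(o_ij = observed | x, y, z).'''
    return -math.log(P_order[(x, y, z)][observed_order])

# f_order accumulates one term per scored word pair (i < j); applying rule r5
# scores the single pair (hui, zenmeyang), translated in inverted order.
f_order = sum([
    pair_score(('VV', 1, 2), ('VV', 2, 3), ('IP', 1, 3), 'Reorder'),
])
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},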
{
"text": "(b) If h is a hypothesis formed by a T2S rule with one or more frontier nonterminals, we evaluate all word pairs as follows: If both w i and w j are spanned by the same frontier nonterminal (e.g., t\u01ceol\u00f9n and h\u00f9i when applying the rule r 4 ), the score f order had been already computed for the underlying subhypothesis, and therefore was already included in the total score. Otherwise, we compute the word pair ordering cost. We find the close relatives x and y representing each w i and w j . If w i is matched by a terminal in r, we select x as the node matching r and spanning exactly w i . If w i is spanned by a frontier nonterminal in r (meaning that it was translated in a subhypothesis), we select x as the node matching that nonterminal. We proceed identically for w j and y. For example, when applying the rule r 4 , the word z\u011bnmey\u00e0ng will be represented by the node VV 2, 3 , while t\u01ceol\u00f9n and h\u00f9i will be represented by the node NP 0, 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
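{
"text": "The node selection for case (b) can be sketched as follows. This is an illustrative Python fragment with invented data structures (the names are ours), mirroring the application of rule r 4 : a word spanned by a frontier nonterminal is represented by the node matching that nonterminal, and a word matched by a terminal is represented by the node spanning exactly that word:

```python
# Data mirroring rule r4 applied over span (0, 3):
# x1:NP covers source words 0..1; zenmeyang (word 2) is matched as a terminal.
frontier_nonterminals = {('NP', 0, 2): 'x1'}
terminal_nodes = {2: ('VV', 2, 3)}

def representative(i):
    '''Return the close relative representing source word w_i.'''
    for (label, start, end), _var in frontier_nonterminals.items():
        if start <= i < end:
            return (label, start, end)
    return terminal_nodes[i]

# talun (0) and hui (1) are both represented by NP over (0, 2);
# zenmeyang (2) is represented by VV over (2, 3).
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},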
{
"text": "Note that the ordering o h i j cannot be determined in some cases, sometimes a source word does not produce any translation, or the translation of one word is entirely surrounded by the translations of another word. A weight corresponding to the binary discount feature f o unknown is added to the score for each such case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
{
"text": "The external model P order (o i j |xyz) is implemented as a maximum entropy model. Features of the model are observed from paths connecting node z with nodes x and y as follows: First, we pick paths z \u2192 x and z \u2192 y. Let z \u2032 be the last node shared by both paths (the closest common ancestor of x and y). Then we distinguish three types of path: (1) The common prefix z \u2192 z \u2032 (it may have zero length), the left path z \u2192 x, and the right path z \u2192 y. We observe the following features on each path: the syntactic labels of the nodes, the production rules, the spans of nodes, a list of stop words immediately preceding and following the span of the node. We merge the features observed from different paths z \u2192 x and z \u2192 y. This approach rule word pair order probability a) how 2 was 2 the discussion 0 meeting 1 r 3 (t\u01ceol\u00f9n,h\u00f9i) Inorder P order o 0,1 |NP 0, 1 , NP 1, 2 , NP 0, 2 r 4 (t\u01ceol\u00f9n,z\u011bnmey\u00e0ng) Reorder P order o 0,2 |NP 0, 2 , VV 2, 3 , IP 0, 3 (h\u00f9i,z\u011bnmey\u00e0ng) Reorder P order o 1,2 |NP 0, 2 , VV 2, 3 , IP 0, 3 b) discuss 0 what 2 will 1 happen 1 r 5 (h\u00f9i, z\u011bnmey\u00e0ng) Reorder P order o 1,2 |VV 1, 2 , VV 2, 3 , IP 1, 3 r 6 (t\u01ceol\u00f9n, h\u00f9i) Inorder P order o 0,1 |VV 0, 1 , IP 1, 3 , IP 0, 3 (t\u01ceol\u00f9n, z\u011bnmey\u00e0ng Inorder ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
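{
"text": "The path splitting and feature observation just described can be sketched as follows. This is a simplified Python illustration: only node-label features are shown, and the feature naming scheme is ours, not the paper's:

```python
def path_features(path_to_x, path_to_y):
    '''Split two root-to-node label paths (z -> x and z -> y) at the closest
    common ancestor z' and emit one feature per (segment, depth, label).'''
    # z' is the end of the longest shared prefix of the two paths.
    k = 0
    while (k < min(len(path_to_x), len(path_to_y))
           and path_to_x[k] == path_to_y[k]):
        k += 1
    segments = {
        'prefix': path_to_x[:k],   # common prefix z -> z' (may be empty)
        'left': path_to_x[k:],     # left path z' -> x
        'right': path_to_y[k:],    # right path z' -> y
    }
    return [f'{name}:{depth}:{label}'
            for name, labels in segments.items()
            for depth, label in enumerate(labels)]

# Paths from IP to the two relative nodes of a toy word pair.
feats = path_features(['IP', 'NP', 'NP'], ['IP', 'VV'])
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},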
{
"text": "P order o 0,2 |VV 0, 1 , IP 1, 3 , IP 0, 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest Reordering Model",
"sec_num": "3"
},
{
"text": "In this section we describe the setup of the experiment, and present results. Finally, we propose future directions of research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Our baseline is a strong F2S system (\u010cmejrek et al., 2013) built on large data with the full set of model features including rule translation probabilities, general lexical and provenance translation probabilities, language model, and a variety of sparse features. We build it as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "The training corpus consists of 16 million sentence pairs available within the DARPA BOLT Chinese-English task. The corpus includes a mix of newswire, broadcast news, webblog data coming from various sources such as LDC, HK Law, HK Hansard and UN data. The Chinese text is segmented with a segmenter trained on CTB data using conditional random fields (CRF). Bilingual word alignments are trained and combined from two sources: GIZA (Och, 2003) and maximum entropy word aligner (Ittycheriah and Roukos, 2005) .",
"cite_spans": [
{
"start": 433,
"end": 444,
"text": "(Och, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 478,
"end": 508,
"text": "(Ittycheriah and Roukos, 2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "Language models are trained on the English side of the parallel corpus, and on monolingual corpora, such as Gigaword (LDC2011T07) and Google News, altogether comprising around 10 billion words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "We parse the Chinese part of the training data with a modified version of the Berkeley parser (Petrov and Klein, 2007) , then prune the obtained parse forests for each training sentence with the marginal probability-based inside-outside algorithm to contain only 3n CFG nodes, where n is the sentence length.",
"cite_spans": [
{
"start": 94,
"end": 118,
"text": "(Petrov and Klein, 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
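{
"text": "The pruning criterion can be sketched as follows. This is an illustrative Python fragment; the marginal values are invented stand-ins for real inside-outside products over the example forest:

```python
def prune_forest(node_marginals, n):
    '''Keep at most 3n forest nodes, ranked by inside-outside marginal,
    where n is the sentence length.'''
    ranked = sorted(node_marginals, key=node_marginals.get, reverse=True)
    return set(ranked[:3 * n])

# Toy marginals for the example forest nodes (label, start, end).
marginals = {('IP', 0, 3): 1.0, ('VV', 2, 3): 0.9, ('NP', 0, 2): 0.7,
             ('NP', 0, 1): 0.6, ('NP', 1, 2): 0.5, ('IP', 1, 3): 0.3,
             ('VV', 0, 1): 0.2, ('VV', 1, 2): 0.15}

# Tiny budget (3n = 6 nodes) chosen purely for illustration.
kept = prune_forest(marginals, n=2)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},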
{
"text": "We extract tree-to-string translation rules from forest-string sentence pairs using the forest-based GHKM algorithm Galley et al., 2004) .",
"cite_spans": [
{
"start": 116,
"end": 136,
"text": "Galley et al., 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "In the decoding step, we use larger input parse forests than in training, we prune them to contain 10n nodes. Then we use fast patternmatching (Zhang et al., 2009) to convert the parse forest into the translation forest.",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "(Zhang et al., 2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "The proposed reordering model is trained on 100, 000 automatically aligned forest-string sentence pairs from the parallel training data. These sentences provide 110M reordering events that are used by megam (Daum\u00e9 III, 2004) to train the maximum entropy model. The current implementation of the reordering model requires offline preprocessing of the input hypergraphs to precompute reordering probabilities for applicable triples of nodes (x, y, z). Since the number of levels in the syntactic trees in T2S rules is limited to 4, we only need to consider such triples, where z is up to 4 levels above x or y.",
"cite_spans": [
{
"start": 207,
"end": 224,
"text": "(Daum\u00e9 III, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "We tune on 1275 sentences, each with 4 references, from the LDC2010E30 corpus, initially released under the DARPA GALE program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "We combine two evaluation metrics for tuning and testing: Bleu (Papineni et al., 2002) and Ter (Snover et al., 2006) . Both the baseline and the reordering experiments are optimized with MIRA (Crammer et al., 2006) portion (691 sentences, 4 references), and NIST MT08 Web portion (666 sentences, 4 references). Table 3 shows all results of the baseline and the system extended with the forest reordering model. The (Ter \u2212 Bleu)/2 score of the baseline system is 12.0 on MT08 Newswire, showing that it is a strong baseline. The system with the proposed reordering model significantly improves the baseline by 0.6, 0.8, and 1.0 (Ter \u2212 Bleu)/2 points on GALE Web, MT08 Newswire, and MT08 Web. The current approach relies on frontier node annotations, ignoring to some extent the internal structure of the T2S rules. As part of future research, we would like to compare this approach with the one that takes into accout the internal structure as well.",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
},
{
"start": 95,
"end": 116,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF18"
},
{
"start": 192,
"end": 214,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 311,
"end": 318,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "We have presented a novel reordering model for the forest-to-string MT system. The model deals with the ambiguity of the input forests, but also predicts specifically to the current parse followed by the translation hypothesis. The reordering probabilities can be precomputed by an offline process, allowing for efficient scoring in runtime. The method provides improvement from 0.6 up to 1.0 point measured by (Ter \u2212 Bleu)/2 metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Only to some extent, the rule still has to match the input forest, but the reordering model decides based on the sum of paths observed between the root and frontier nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Ji\u0159\u00ed Havelka for proofreading and helpful suggestions. We would like to acknowledge the support of DARPA under Grant HR0011-12-C-0015 for funding part of this work. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the DARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A hierarchical phrase-based model for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Pro- ceedings of the ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Online passive-aggressive algorithms",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Notes on CG and LM-BFGS optimization of logistic regression",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2004. Notes on CG and LM-BFGS op- timization of logistic regression. Paper available at http://pub.hal3.name#daume04cg-bfgs, im- plementation available at http://hal3.name/ megam/.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proceedings of the HLT-NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scalable inference and training of context-rich syntactic translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Pro- ceedings of the COLING-ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical Phrase-based Machine Translation with Word-based Reordering Model",
"authors": [
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Seiichi",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsuhiko Hayashi, Hajime Tsukada, Katsuhito Sudoh, Kevin Duh, and Seiichi Yamamoto. 2010. Hi- erarchical Phrase-based Machine Translation with Word-based Reordering Model. In Proceedings of the COLING.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Maximum entropy based phrase reordering for hierarchical phrase-based translation",
"authors": [
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongjun He, Yao Meng, and Hao Yu. Maximum entropy based phrase reordering for hierarchical phrase-based translation. In Proceedings of the EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical syntax-directed translation with extended domain of locality",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the AMTA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Factored soft source syntactic constraints for hierarchical machine translation",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Rabih",
"middle": [],
"last": "Zbib",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceeedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang, Jacob Devlin, and Rabih Zbib. 2013. Factored soft source syntactic constraints for hierarchical machine translation. In Proceeedings of the EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A maximum entropy word aligner for arabic-english machine translation",
"authors": [
{
"first": "Abraham",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the HLT and EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abraham Ittycheriah and Salim Roukos. 2005. A max- imum entropy word aligner for arabic-english ma- chine translation. In Proceedings of the HLT and EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Joseph"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Pro- ceedings of NAACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Edinburgh system description for the 2005 iwslt speech translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [
"Birch"
],
"last": "Mayne",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In Proceedings of the IWSLT.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Treeto-string alignment template for statistical machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree- to-string alignment template for statistical machine translation. In Proceedings of COLING-ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Forest-based translation rule extraction",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi and Liang Huang. 2008. Forest-based trans- lation rule extraction. In Proceedings of EMNLP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Forestbased translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL: HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest- based translation. In Proceedings of ACL: HLT.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Joseph",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Joseph Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improved inference for unlexicalized parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT- NAACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the AMTA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A unigram orientation model for statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. Proceed- ings of the HLT-NAACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning linear ordering problems for better translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble and Jason Eisner. Learning linear order- ing problems for better translation. In Proceedings of the EMNLP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Flexible and efficient hypergraph interactions for joint hierarchical and forest-to-string decoding",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "\u010cmejrek",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin\u010cmejrek, Haitao Mi, and Bowen Zhou. 2013. Flexible and efficient hypergraph interactions for joint hierarchical and forest-to-string decoding. In Proceedings of the EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fast translation rule matching for syntax-based statistical machine translation",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1037--1045",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hui Zhang, Min Zhang, Haizhou Li, and Chew Lim Tan. 2009. Fast translation rule matching for syntax-based statistical machine translation. In Pro- ceedings of EMNLP, pages 1037-1045, Singapore, August.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Tree-to-string rule r 4 ."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Parse and translation hypergraphs. (A)"
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Tree-to-string translation rules (without</td></tr><tr><td>internal structures).</td></tr><tr><td>during translation. First, the reordering model suit-</td></tr><tr><td>able for F2S translation should allow for trans-</td></tr><tr><td>lation of all meanings present in the input. Sec-</td></tr><tr><td>ond, as the process of deriving a partial transla-</td></tr><tr><td>tion hypothesis rules out some of the meanings,</td></tr><tr><td>the reordering model should restrict itself to fea-</td></tr><tr><td>tures originating in the relevant parts of the input</td></tr><tr><td>forest. Our work presents a novel technique satis-</td></tr><tr><td>fying both these requirements, while leaving the</td></tr><tr><td>disambuiguation decision up to the model using</td></tr><tr><td>global features.</td></tr><tr><td>The paper is organized as follows: We briefly</td></tr><tr><td>overview the F2S and Hiero translation models in</td></tr><tr><td>Section 2, present the proposed forest reordering</td></tr><tr><td>model in Section 3, describe our experiment and</td></tr><tr><td>present results in Section 4.</td></tr></table>",
"text": ""
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>ignores the internal structure of each rule 1 , relying</td></tr><tr><td>on frontier node annotation. On the other hand it</td></tr><tr><td>is still feasible to precompute the reordering prob-</td></tr><tr><td>abilities for all combinations of xyz.</td></tr></table>",
"text": "Example of reordering scores computed for derivations (a) and (b)."
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Results."
}
}
}
}