{ "paper_id": "P09-1037", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:53:55.705552Z" }, "title": "Topological Ordering of Function Words in Hierarchical Phrase-based Translation", "authors": [ { "first": "Hendra", "middle": [], "last": "Setiawan", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "hendra@umiacs.umd.edu" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University of Singapore", "location": {} }, "email": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Maryland", "location": {} }, "email": "resnik@umiacs.umd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Hierarchical phrase-based models are attractive because they provide a consistent framework within which to characterize both local and long-distance reorderings, but they also make it difficult to distinguish many implausible reorderings from those that are linguistically plausible. Rather than appealing to annotation-driven syntactic modeling, we address this problem by observing the influential role of function words in determining syntactic structure, and introducing soft constraints on function word relationships as part of a standard log-linear hierarchical phrase-based model. 
Experimentation on Chinese-English and Arabic-English translation demonstrates that the approach yields significant gains in performance.", "pdf_parse": { "paper_id": "P09-1037", "_pdf_hash": "", "abstract": [ { "text": "Hierarchical phrase-based models are attractive because they provide a consistent framework within which to characterize both local and long-distance reorderings, but they also make it difficult to distinguish many implausible reorderings from those that are linguistically plausible. Rather than appealing to annotation-driven syntactic modeling, we address this problem by observing the influential role of function words in determining syntactic structure, and introducing soft constraints on function word relationships as part of a standard log-linear hierarchical phrase-based model. Experimentation on Chinese-English and Arabic-English translation demonstrates that the approach yields significant gains in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Hierarchical phrase-based models (Chiang, 2005; Chiang, 2007) offer a number of attractive benefits in statistical machine translation (SMT), while maintaining the strengths of phrase-based systems (Koehn et al., 2003) . The most important of these is the ability to model long-distance reordering efficiently. To model such a reordering, a hierarchical phrase-based system demands no additional parameters, since long and short distance reorderings are modeled identically using synchronous context free grammar (SCFG) rules. The same rule, depending on its topological ordering (i.e., its position in the hierarchical structure) can affect both short and long spans of text. Interestingly, hierarchical phrase-based models provide this benefit without making any linguistic commitments beyond the structure of the model. However, the system's lack of linguistic commitment is also responsible for one of its greatest drawbacks. 
In the absence of linguistic knowledge, the system models linguistic structure using an SCFG that contains only one type of nonterminal symbol 1 . As a result, the system is susceptible to the overgeneration problem: the grammar may suggest more reordering choices than appropriate, and many of those choices lead to ungrammatical translations. Chiang (2005) hypothesized that incorrect reordering choices would often correspond to hierarchical phrases that violate syntactic boundaries in the source language, and he explored the use of a constituent feature intended to reward the application of hierarchical phrases which respect source language syntactic categories. Although this did not yield significant improvements, Marton and Resnik (2008) and Chiang et al. (2008) extended this approach by introducing soft syntactic constraints similar to the constituent feature, but more fine-grained and sensitive to distinctions among syntactic categories; these led to substantial improvements in performance. Zollman et al. (Setiawan et al., 2007) , expanding on the previous approach by modeling pairs of function words rather than individual function words in isolation. ", "cite_spans": [ { "start": 33, "end": 47, "text": "(Chiang, 2005;", "ref_id": "BIBREF2" }, { "start": 48, "end": 61, "text": "Chiang, 2007)", "ref_id": "BIBREF3" }, { "start": 196, "end": 216, "text": "(Koehn et al., 2003)", "ref_id": "BIBREF7" }, { "start": 1066, "end": 1067, "text": "1", "ref_id": null }, { "start": 1268, "end": 1281, "text": "Chiang (2005)", "ref_id": "BIBREF2" }, { "start": 1665, "end": 1671, "text": "(2008)", "ref_id": "BIBREF1" }, { "start": 1676, "end": 1696, "text": "Chiang et al. 
(2008)", "ref_id": "BIBREF0" }, { "start": 1945, "end": 1968, "text": "(Setiawan et al., 2007)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X \u2192 \u27e8\u03b3, \u03b1, \u223c\u27e9", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "where X is the nonterminal symbol and \u03b3 and \u03b1 are strings that contain the combination of lexical items and nonterminals in the source and target languages, respectively. The \u223c symbol indicates that nonterminals in \u03b3 and \u03b1 are synchronized through co-indexation; i.e., nonterminals with the same index are aligned. Nonterminal correspondences are strictly one-to-one, and in practice the number of nonterminals on the right hand side is constrained to at most two, which must be separated by lexical items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Each rule is associated with a score that is computed via the following log-linear formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "w(X \u2192 \u27e8\u03b3, \u03b1, \u223c\u27e9) = \u220f_i f_i^{\u03bb_i} (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "where f_i is a feature describing one particular aspect of the rule and \u03bb_i is the corresponding weight of that feature. 
Given \u1ebd and f as the source and target phrases associated with the rule, typical features used are the rule's translation probability P_trans(f|\u1ebd) and its inverse P_trans(\u1ebd|f), the lexical probability P_lex(f|\u1ebd) and its inverse P_lex(\u1ebd|f).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Systems generally also employ a word penalty, a phrase penalty, and a target language model feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(See (Chiang, 2005) for more detailed discussion.) Our pairwise dominance model will be expressed as an additional rule-level feature in the model.", "cite_spans": [ { "start": 5, "end": 19, "text": "(Chiang, 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Translation of a source sentence e using hierarchical phrase-based models is formulated as a search for the most probable derivation D* whose source side is equal to e: Suppose we want to translate the Chinese sentence in Fig. 1 into English using the following set of rules: To correctly translate the sentence, a hierarchical phrase-based system needs to model the subject noun phrase, object noun phrase and copula constructions; these are captured by rules X a , X d and X b respectively, so this set of rules represents a hierarchical phrase-based system that can be used to correctly translate the Chinese sentence. Note that the Chinese word order is correctly preserved in the subject (X a ) as well as copula constructions (X b ), and correctly inverted in the object construc-", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 229, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "D* = argmax_D P(D), where source(D) = e. 
D = X i , i \u2208 1...|D|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. X a \u2192 X 1 , computers and X 1 2. X b \u2192 X 1 X 2 , X 1 are X 2 3. X c \u2192 , cell phones 4. X d \u2192 X 1 , inventions of X 1 5. X e \u2192 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "tion (X d ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, although it can generate the correct translation in Fig Figure 2 : The derivation that leads to the correct translation ", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 64, "text": "Fig", "ref_id": null }, { "start": 65, "end": 73, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": ". 3 are X a \u227a X b \u227a X c \u227a X d \u227a X e and X d \u227a X a \u227a X b \u227a X c \u227a X e ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "X d \u21d2 X a , inventions of X a \u21d2 X b , inventions of computers and X b \u21d2 X c X e ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X b ( \u227a ) \u2192 X c X d ( )", "eq_num": "(3)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X d \u2192 X 1 2 3 , inventions 3 of 2 X 1", "eq_num": "(4)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "The computation of the dominance relationship using this alignment information will be discussed in detail in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], 
"section": "Introduction", "sec_num": "1" }, { "text": "Again taking X b in Fig. 2 as a case in point, the dominance feature takes the following form:", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 26, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f_dom(X b ) \u2248 dom(d( , ) | , ) (5) dom(d(Y_L, Y_R) | Y_L, Y_R) (6)", "eq_num": "(6)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "where the probability of \u227a is estimated according to the probability of d( , ). Fig. 1 . The value with the highest probability is in bold.", "cite_spans": [], "ref_spans": [ { "start": 80, "end": 86, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given two function words Y and Y', with Y preceding Y', we define the value of d by examining the MCAs of the two function words. For all experiments, we report performance using the BLEU score (Papineni et al., 2002) , and we assess statistical significance using the standard bootstrapping approach introduced by (Koehn, 2004) .", "cite_spans": [ { "start": 189, "end": 212, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" }, { "start": 309, "end": 322, "text": "(Koehn, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "d(Y, Y') = leftFirst, if Y' \u2208 MCA_R(Y) \u2227 Y \u2209 MCA_L(Y'); rightFirst, if Y' \u2209 MCA_R(Y) \u2227 Y \u2208 MCA_L(Y'); dontCare, if Y' \u2208 MCA_R(Y) \u2227 Y \u2208 MCA_L(Y'); neither, if Y' \u2209 MCA_R(Y) \u2227 Y \u2209 MCA_L(Y') (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Chinese-to-English experiments. 
We trained the system on the NIST MT06 Eval corpus excluding the UN data (approximately 900K sentence pairs). For the language model, we used a 5-gram model with modified Kneser-Ney smoothing (Kneser and Ney, 1995) trained on the English side of our training data as well as portions of the Gigaword v2 English corpus. We used the NIST MT03 test set as the development set for optimizing interpolation weights using minimum error rate training (MERT; (Och and Ney, 2002) ). We carried out evaluation of the systems on the NIST 2006 evaluation test (MT06) and the NIST 2008 evaluation test (MT08). We segmented Chinese as a preprocessing step using the Harbin segmenter (Zhao et al., 2001). Experimental Results", "cite_spans": [ { "start": 222, "end": 244, "text": "(Kneser and Ney, 1995)", "ref_id": "BIBREF6" }, { "start": 474, "end": 480, "text": "(MERT;", "ref_id": null }, { "start": 481, "end": 500, "text": "(Och and Ney, 2002)", "ref_id": "BIBREF10" }, { "start": 593, "end": 625, "text": "NIST 2008 evaluation test (MT08)", "ref_id": null }, { "start": 700, "end": 718, "text": "(Zhao et al., 2001", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Chinese-to-English experiments. ? 13 , taken from Chinese MT06 test set, are as follows (co-indexing subscripts represent reconstructed word alignments):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 baseline:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "military 1 intelligence 2 under observation 8 in 5 u.s. 6 air raids 7 : 3 iran 4 to 9 how 11 long 12 ? 13", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 +dom(N=128): military 1 survey 2 : 3 how 11 long 12 iran 4 under 8 air strikes 7 of the u.s 6 can 9 hold out 10 ? 
13", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to some lexical translation errors (e.g.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "6 should be translated to U.S. Army),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "the baseline system also makes mistakes in reordering. In the introduction, we discussed Chiang's (2005) constituency feature, related ideas explored by Marton and Resnik (2008) and Chiang et al. (2008) , and the target-side variation investigated", "cite_spans": [ { "start": 81, "end": 96, "text": "Chiang's (2005)", "ref_id": "BIBREF2" }, { "start": 149, "end": 169, "text": "Chiang et al. (2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "by Zollmann et al. (2006) . These methods differ from each other mainly in terms of the specific 9 We plan to do corresponding experimentation and analysis for Arabic once we identify a suitable list of manually identified function words.", "cite_spans": [ { "start": 3, "end": 24, "text": "Zollmann et al. (2006)", "ref_id": null }, { "start": 100, "end": 101, "text": "9", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "linguistic knowledge being used and on which side the constraints are applied. Shen et al. (2008) proposed to use linguistic knowledge expressed in terms of a dependency grammar, instead of a syntactic constituency grammar. Vilar et al. (2008) attempted to use syntactic constituency on both the source and target languages in the same spirit as the constituency feature, along with some simple pattern-based heuristics, an approach also investigated by Iglesias et al. (2009) . Aiming at improving the selection of derivations, Zhou et al. (2008) ", "cite_spans": [ { "start": 76, "end": 94, "text": "Shen et al. 
(2008)", "ref_id": "BIBREF14" }, { "start": 235, "end": 241, "text": "(2008)", "ref_id": "BIBREF1" }, { "start": 450, "end": 472, "text": "Iglesias et al. (2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In practice, one additional nonterminal symbol is used in glue rules. This is not relevant in the present discussion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note, however, that overgeneration in BTG can be viewed as a feature, not a bug, since the formalism was origi-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the term noun phrase marker here in a general sense, meaning that in this example it helps tell us that the phrase is part of an NP, not as a technical linguistic term. It serves in other grammatical roles, as well. Disambiguating the syntactic roles of function words might be a particularly useful thing to do in the model we are proposing; this is a question for future research.4 Note that for expository purposes, we designed our simple grammar to ensure that these function words appear in separate rules.5 Two function words are considered neighbors iff no other function word appears between them in the source sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The target language side is concealed for clarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": ".5% and 0.8% of the words in the Chinese and Arabic vocabularies, respectively. 
The validity of the frequency-based strategy, relative to linguistically-defined function words, is discussed in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In fact, we initially simply chose N = 128 for our experimentation, and then did runs with alternative N to confirm our intuitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Online large-margin training of syntactic and structural translation features", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. 
In Proceedings of the", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Conference on Empirical Methods in Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference on Empirical Methods in Natural Language Processing, pages 224-233, Honolulu, Hawaii, October.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A hierarchical phrase-based model for statistical machine translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 263-270, Ann Arbor, Michigan, June. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical phrase-based translation", "authors": [ { "first": "David", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics", "volume": "33", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Student Handbook for Chinese Function Words", "authors": [ { "first": "Jiaying", "middle": [], "last": "Howard", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaying Howard. 2002. A Student Handbook for Chinese Function Words. 
The Chinese University Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Rule filtering by pattern for efficient hierarchical translation", "authors": [ { "first": "Gonzalo", "middle": [], "last": "Iglesias", "suffix": "" }, { "first": "Adria", "middle": [], "last": "De Gispert", "suffix": "" }, { "first": "Eduardo", "middle": [ "R" ], "last": "Banga", "suffix": "" }, { "first": "William", "middle": [], "last": "Byrne", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gonzalo Iglesias, Adria de Gispert, Eduardo R. Banga, and William Byrne. 2009. Rule filtering by pattern for efficient hierarchical translation. In Proceedings of the 12th Conference of the European Chapter of the Association of Computational Linguistics (to appear).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improved backing-off for m-gram language modeling", "authors": [ { "first": "R", "middle": [], "last": "Kneser", "suffix": "" }, { "first": "H", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1995, "venue": "Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing95", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. 
In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing95, pages 181-184, Detroit, MI, May.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Statistical phrase-based translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Franz", "middle": [ "J" ], "last": "Och", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127-133, Edmonton, Alberta, Canada, May. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP 2004", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. 
In Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Soft syntactic constraints for hierarchical phrased-based translation", "authors": [ { "first": "Yuval", "middle": [], "last": "Marton", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2008, "venue": "Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1003-1011, Columbus, Ohio, June.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Discriminative training and maximum entropy models for statistical machine translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 295-302, Philadelphia, Pennsylvania, USA, July.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The alignment template approach to statistical machine translation", "authors": [ { "first": "Josef", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ordering phrases with function words", "authors": [ { "first": "Hendra", "middle": [], "last": "Setiawan", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Li", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "712--719", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hendra Setiawan, Min-Yen Kan, and Haizhou Li. 2007. Ordering phrases with function words. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 712-719, Prague, Czech Republic, June.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A new string-to-dependency machine translation algorithm with a target dependency language model", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 577-585, Columbus, Ohio, June.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Analysing soft syntax features and heuristics for hierarchical phrase based machine translation. 
International Workshop on Spoken Language Translation", "authors": [ { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Vilar, Daniel Stein, and Hermann Ney. 2008. Analysing soft syntax features and heuristics for hierarchical phrase based machine translation. International Workshop on Spoken Language Translation 2008, pages 190-197, October.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404, Sep.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Increasing accuracy of chinese segmentation with strategy of multi-step processing", "authors": [ { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "Lv", "suffix": "" }, { "first": "Jianmin", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Muyun", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2001, "venue": "Journal of Chinese Information Processing", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiejun Zhao, Yajuan Lv, Jianmin Yao, Hao Yu, Muyun Yang, and Fang Liu. 2001. 
Increasing accuracy of chinese segmentation with strategy of multi-step processing. Journal of Chinese Information Processing (Chinese Version), 1:13-18.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Prior derivation models for formally syntax-based translation using linguistically syntactic parsing and tree kernels", "authors": [ { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yuqing", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bowen Zhou, Bing Xiang, Xiaodan Zhu, and Yuqing Gao. 2008. Prior derivation models for formally syntax-based translation using linguistically syntactic parsing and tree kernels. In Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2), pages 19-27, Columbus, Ohio, June.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Syntax augmented machine translation via chart parsing", "authors": [ { "first": "Andreas", "middle": [], "last": "Zollmann", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Venugopal", "suffix": "" } ], "year": 2006, "venue": "Proceedings on the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. 
In Proceedings on the Workshop on Statistical Machine Translation, pages 138-141, New York City, June.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "the last century Co-indexation of nonterminals on the right hand side is indicated by subscripts, and for our examples the label of the nonterminal on the left hand side is used as the rule's unique identifier.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": ". 2, the grammar has no mechanism to prevent the generation of an incorrect translation like the one illustrated in Fig. 3. If we contrast the topological ordering of the rules in Fig. 2 and Fig. 3, we observe that the difference is small but quite significant. Using the precede symbol (\u227a) to indicate that the first operand immediately dominates the second operand in the hierarchical structure, the topological orderings in Fig. 2 and", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Fig. 3 are X a \u227a X b \u227a X c \u227a X d \u227a X e and X d \u227a X a \u227a X b \u227a X c \u227a X e , respectively. The", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "respectively. The only difference is the topological ordering of X d : in Fig. 2, it appears below most of the other hierarchical phrases, while in Fig. 3, it appears above all the other hierarchical phrases. originally introduced for bilingual analysis rather than generation of translations. Modeling the topological ordering of hierarchical phrases is computationally prohibitive, since there are literally millions of hierarchical rules in the system's automatically-learned grammar and millions of possible ways to order their application. To avoid this computational problem and still model the topological ordering, we propose to use the topological ordering of function words as a practical approximation. 
This is motivated by the fact that function words tend to carry crucial syntactic information in sentences, serving as the glue for content-bearing phrases. Moreover, the positional relationships between function words and content phrases tend to be fixed (e.g., in English, prepositions invariably precede their object noun phrase), at least for the languages we have worked with thus far. In the Chinese sentence above, there are three function words involved: the conjunction (and), the copula (are), and the noun phrase marker (of). 3 Using the function words as approximate representations of the rules in which they appear, the topological ordering of hierarchical phrases in Fig. 2 is (and) \u227a (are) \u227a (of), while that in Fig. 3 is (of) \u227a (and) \u227a (are). 4 We can distinguish the correct and incorrect reordering choices by looking at this simple information. In the correct reordering choice, (of) appears at the lower level of the hierarchy while in the incorrect one, (of) appears at the highest level of the hierarchy.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "Fig. 4a illustrates the leftFirst dominance value where the intersection of the MCAs contains only the second function word ( (of)). Fig. 4b illustrates the dontCare value, where the intersection contains both function words. Similarly, rightFirst and neither are represented by an intersection that contains only Y', or by an empty intersection, respectively. Once all the d values are counted, the pairwise dominance model of neighboring function words can be estimated simply from counts using maximum likelihood.", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "Illustrations for: a) the leftFirst value, and b) the dontCare value. Thickly bordered boxes are MCAs of the function words while solid circles are the alignment points of the function words. 
The gray boxes are the intersections of the two MCAs.", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": "Arabic-to-English experiments. We trained the system on a subset of 950K sentence pairs from the NIST MT08 training data, selected by subsampling from the full training data using a method proposed by Kishore Papineni (personal communication). The subsampling algorithm selects sentence pairs from the training data in a way that seeks reasonable representation for all n-grams appearing in the test set. For the language model, we used a 5-gram model trained on the English portion of the whole training data plus portions of the Gigaword v2 corpus. We used the NIST MT03 test set as the development set for optimizing the interpolation weights using MERT. We carried out the evaluation of the systems on the NIST 2006 evaluation set (MT06) and the NIST 2008 evaluation set (MT08). Arabic source text was preprocessed by separating clitics, the definiteness marker, and the future tense marker from their stems.", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "7", "uris": null, "type_str": "figure", "num": null }, "FIGREF8": { "text": "proposed prior derivation models utilizing syntactic annotation of the source language, which can be seen as smoothing the probabilities of hierarchical phrase features. A key point is that the model we have introduced in this paper does not require the linguistic supervision needed in most of this prior work. We estimate the parameters of our model from parallel text without any linguistic annotation. That said, we would emphasize that our approach is, in fact, motivated in linguistic terms by the role of function words in natural language syntax. 10 Conclusion We have presented a pairwise dominance model to address reordering issues that are not handled particularly well by standard hierarchical phrase-based modeling. 
In particular, the minimal linguistic commitment in hierarchical phrase-based models renders them susceptible to overgeneration of reordering choices. Our proposal handles the overgeneration problem by identifying hierarchical phrases with function words and by using function word relationships to incorporate soft constraints on topological orderings. Our experimental results demonstrate that introducing the pairwise dominance model into hierarchical phrase-based modeling improves performance significantly in large-scale Chinese-to-English and Arabic-to-English translation tasks.", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "took a complementary approach, constraining the application of hierarchical rules to respect syntactic boundaries in the target language syntax. Whether the focus is on constraints from the source language or the target language, the main ingredient in both previous approaches is the idea of constraining the spans of hierarchical phrases to respect syntactic boundaries.", "num": null, "html": null, "type_str": "table", "content": "
In this paper, we pursue a different approach to improving reordering choices in a hierarchical phrase-based model. Instead of biasing the model toward hierarchical phrases whose spans respect syntactic boundaries, we focus on the topological ordering of phrases in the hierarchical structure. We conjecture that since incorrect reordering choices correspond to incorrect topological orderings, boosting the probability of correct topological ordering choices should improve the system. Although related to previous proposals (correct topological orderings lead to correct spans and vice versa), our proposal incorporates broader context and is structurally more aware, since we look at the topological ordering of a phrase relative to other phrases, rather than modeling additional properties of a phrase in isolation. In addition, our proposal requires no monolingual parsing or linguistically informed syntactic modeling for either the source or target language.

The key to our approach is the observation that we can approximate the topological ordering of hierarchical phrases via the topological ordering of function words. We introduce a statistical reordering model that we call the pairwise dominance model, which characterizes reorderings of phrases around a pair of function words. In modeling function words, our model can be viewed as a successor to the function words-centric reordering model.
" }, "TABREF3": { "text": ", computers and X c are X d \u21d2 X d , computers and cell phones are X d \u21d2 X e , computers and cell phones are inventions of X e \u21d2 , computers and cell phones are inventions of the last century", "num": null, "html": null, "type_str": "table", "content": "
computers and cell phones are inventions of the last century

Figure 1: A running example of Chinese-to-English translation.

X a \u21d2 X b , computers and X b \u21d2 X c X d
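The derivation steps above can be read as a tree of rule applications, and the topological ordering discussed in this paper is just the dominance order of that tree. As a minimal sketch (the tree representation, class names, and the idea of attaching each rule's function words to its node are our illustrative assumptions, not the paper's implementation), a pre-order traversal reads off which function words dominate which:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Node:
    """One applied SCFG rule in a derivation tree (hypothetical encoding)."""
    function_words: List[str]   # function words this rule introduces
    children: List["Node"]      # rules substituted into its nonterminals


def topological_order(node: Node) -> List[str]:
    """Pre-order traversal: a rule's function words precede (dominate)
    the function words of every rule nested beneath it."""
    order = list(node.function_words)
    for child in node.children:
        order.extend(topological_order(child))
    return order


# The correct derivation of the running example keeps "of" low in the tree:
correct = Node(["and"], [Node(["are"], [Node(["of"], [])])])
# topological_order(correct) -> ['and', 'are', 'of']
```

An incorrect derivation that hoists the noun phrase marker to the top would instead yield the ordering beginning with "of", which is exactly the distinction the paper exploits.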
4 Pairwise Dominance Model

Our example suggests that we may be able to improve the translation model's sensitivity to correct versus incorrect reordering choices by modeling the topological ordering of function words. We do so by introducing a predicate capturing the dominance relationship in a derivation between pairs of neighboring function words. 5

Let us define a predicate d(Y', Y'') that takes two function words as input and outputs one of
" }, "TABREF4": { "text": "inventions of computers and X c are X e \u21d2 X e , inventions of computers and cell phones are X e \u21d2 , inventions of computers and cell phones are the last century Figure 3: The derivation that leads to the incorrect translation four values: {leftFirst, rightFirst, dontCare, nei-ther}, where Y appears to the left of Y in the source sentence. The value leftFirst indicates that in the derivation's topological ordering, Y precedes Y (i.e. Y dominates Y in the hierarchical structure), while rightFirst indicates that Y dominates Y . InFig. 2, d(Y , Y ) = leftFirst for Y = the copula (are) and Y = the noun", "num": null, "html": null, "type_str": "table", "content": "
phrase marker (of).

The dontCare and neither values capture two additional relationships: dontCare indicates that the topological ordering of the function words is flexible, and neither indicates that the topological ordering of the function words is disjoint. The former is useful in cases where the hierarchical phrases suggest the same kind of reordering, and therefore restricting their topological ordering is not necessary. This is illustrated in Fig. 2 by the pair (and) and the copula (are), where putting either one above the other does not change the final word order. The latter is useful in cases where the two function words do not share the same parent.
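As a rough sketch of how the four dominance values might be computed and estimated (representing each function word's Maximal Consistent Alignment, MCA, as a set of alignment points is our assumption for illustration; the function names are ours, not the paper's): the intersection of the two MCAs containing only the right word's alignment points gives leftFirst, only the left word's gives rightFirst, both gives dontCare, and an empty intersection gives neither, after which P(d | Y', Y'') falls out of relative-frequency counts.

```python
from collections import Counter


def dominance(mca_left, mca_right, pts_left, pts_right):
    """Classify d(Y', Y'') from the intersection of the two MCAs.
    mca_*: sets of (src, tgt) alignment points forming each word's MCA;
    pts_*: the alignment points of the function words themselves."""
    inter = mca_left & mca_right
    left_in = bool(pts_left & inter)    # left word's points in intersection
    right_in = bool(pts_right & inter)  # right word's points in intersection
    if left_in and right_in:
        return "dontCare"
    if right_in:
        return "leftFirst"
    if left_in:
        return "rightFirst"
    return "neither"


def mle_dominance(observations):
    """observations: iterable of (Y_left, Y_right, d_value) triples collected
    from aligned training data; returns MLE estimates of P(d | pair)."""
    counts = Counter(observations)
    totals = Counter()
    for (l, r, _), n in counts.items():
        totals[(l, r)] += n
    return {(l, r, d): n / totals[(l, r)] for (l, r, d), n in counts.items()}
```

For example, two overlapping MCAs whose intersection contains only the right word's alignment point are classified as leftFirst, matching Fig. 4a.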
Formally, this model requires several changes in
the design of the hierarchical phrase-based system.
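Since the paper describes the dominance model as a soft constraint inside a standard log-linear hierarchical phrase-based model, one plausible shape for the integration is an extra feature function whose value sums the log-probabilities of the observed dominance values over neighboring function-word pairs in a candidate derivation. The names and weights below are hypothetical, a sketch rather than the system's actual design:

```python
import math


def dominance_feature(fw_pairs, dom_prob):
    """fw_pairs: (Y_left, Y_right, d_value) triples observed in a derivation;
    dom_prob: estimated table P(d | Y_left, Y_right). Returns the feature value
    as a sum of log-probabilities."""
    return sum(math.log(dom_prob[(l, r, d)]) for l, r, d in fw_pairs)


def derivation_score(features, weights):
    """Standard log-linear combination over all feature functions."""
    return sum(weights[name] * value for name, value in features.items())
```

Because the feature is combined with the usual translation and language model features rather than filtering hypotheses outright, dominance preferences act as soft constraints: a derivation with a dispreferred topological ordering is penalized, not forbidden.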
" }, "TABREF7": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
" }, "TABREF8": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
" }, "TABREF9": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Table: Experimental results on Chinese-to-English translation with the pairwise dominance model (dom) of different N. The baseline (the first line) is the original hierarchical phrase-based system. Statistically significant results (p < 0.01) over the baseline are in bold.

                 MT06   MT08
baseline         41.56  40.06
+dom(N = 32)     41.66  40.26
+dom(N = 64)     42.03  40.73
+dom(N = 128)    42.66  41.08
+dom(N = 256)    42.28  40.69
+dom(N = 512)    41.97  40.95
+dom(N = 1024)   42.05  40.55
+dom(N = 2048)   42.48  41.47
" }, "TABREF10": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Table: Experimental results on Arabic-to-English translation with the pairwise dominance model (dom) of different N. The baseline (the first line) is the original hierarchical phrase-based system. Statistically significant results over the baseline (p < 0.01) are in bold.
8 Discussion and Future Work

The results in both sets of experiments show consistently that we have achieved significant gains by modeling the topological ordering of function words. When we visually inspect and compare the outputs of our system with those of the baseline, we observe that improved BLEU scores often correspond to visible improvements in the subjective translation quality. For example, the translations for the Chinese sentence12:
" } } } }