{
"paper_id": "N10-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:50:48.378698Z"
},
"title": "Improving Phrase-Based Translation with Prototypes of Short Phrases",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Liberato",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Mohit",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": "behrang@cs.pitt.edu"
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pittsburgh",
"location": {}
},
"email": "hwa@cs.pitt.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We investigate methods of generating additional bilingual phrase pairs for a phrasebased decoder by translating short sequences of source text. Because our translation task is more constrained, we can use a model that employs more linguistically rich features than a traditional decoder. We have implemented an example of this approach. Experimental results suggest that the phrase pairs produced by our method are useful to the decoder, and lead to improved sentence translations.",
"pdf_parse": {
"paper_id": "N10-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "We investigate methods of generating additional bilingual phrase pairs for a phrasebased decoder by translating short sequences of source text. Because our translation task is more constrained, we can use a model that employs more linguistically rich features than a traditional decoder. We have implemented an example of this approach. Experimental results suggest that the phrase pairs produced by our method are useful to the decoder, and lead to improved sentence translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, there have been a number of successful attempts at improving phrase-based statistical machine translation by exploiting linguistic knowledge such as morphology, part-of-speech tags, and syntax. Many translation models use such knowledge before decoding (Xia and McCord, 2004) and during decoding Gimpel and Smith, 2009; Chiang et al., 2009) , but they are limited to simpler features for practical reasons, often restricted to conditioning left-toright on the target sentence. Traditionally, n-best rerankers (Shen et al., 2004) have applied expensive analysis after the translation process, on both the source and target side, though they suffer from being limited to whatever is on the n-best list (Hasan et al., 2007) .",
"cite_spans": [
{
"start": 263,
"end": 285,
"text": "(Xia and McCord, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 306,
"end": 329,
"text": "Gimpel and Smith, 2009;",
"ref_id": "BIBREF4"
},
{
"start": 330,
"end": 350,
"text": "Chiang et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 519,
"end": 538,
"text": "(Shen et al., 2004)",
"ref_id": "BIBREF12"
},
{
"start": 710,
"end": 730,
"text": "(Hasan et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We argue that it can be desirable to pre-translate parts of the source text before sentence-level decoding begins, using a richer model that would typically be out of reach during sentence-level decoding. In this paper, we describe a particular method of generating additional bilingual phrase pairs for a new source text, using what we call phrase prototypes, which are are learned from bilingual training data. Our goal is to generate improved translations of relatively short phrase pairs to provide the SMT decoder with better phrasal choices. We validate the idea through experiments on Arabic-English translation. Our method produces a 1.3 BLEU score increase (3.3% relative) on a test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Re-ranking tends to use expensive features of the entire source and target sentences, s and t, and alignments, a, to produce a score for the translation. We will call this scoring function \u03c6(s, t, a). While \u03c6(\u2022) might capture quite a bit of linguistic information, it can be problematic to use this function for decoding directly. This is due to both the expense of computing it, and the difficulty in using it to guide the decoder's search. For example, a choice of \u03c6(\u2022) that relies on a top-down parser is difficult to integrate into a left-to-right decoder (Charniak et al., 2003) .",
"cite_spans": [
{
"start": 560,
"end": 583,
"text": "(Charniak et al., 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "Our idea is to use an expensive scoring function to guide the search for potential translations for part of a source sentence, S, even if translating all of it isn't feasible. We can then provide these translations to the decoder, along with their scores, to incorporate them as it builds the complete translation of S. This differs from approaches such as (Och and Ney, 2004) because we generate new phrase pairs in isolation, rather than incorporating everything into the sentence-level decoder. The baseline system is the Moses phrase-based translation system .",
"cite_spans": [
{
"start": 357,
"end": 376,
"text": "(Och and Ney, 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "For this work, we consider a scoring function based on part-of-speech (POS) tags, \u03c6 P OS (\u2022). It operates in two steps: it converts the source and target phrases, plus alignments, into what we call a phrase prototype, then assigns a score to it based on how common that prototype was during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "Each phrase pair prototype is a tuple containing the source prototype, target prototype, and alignment prototype, respectively. The source and target prototypes are a mix of surface word forms and POS tags, such as the Arabic string NN Al JJ , or the English string NN NN . For example, the source and target prototypes above might be used in the phrase prototype NN 0 Al JJ 1 , NN 1 NN 0 , with the alignment prototype specified implicitly via subscripts for brevity. For simplicity, the alignment prototype is restricted to allow a source or target word/tag to be unaligned, plus 1:1 alignments between them. We do not consider 1:many, many:1, or many:many alignments in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "For any input s, t, a , it is possible to construct potentially many phrase prototypes from it by choosing different subsets of the source and target words to represent as POS tags. In the above example, the Arabic determiner Al could be converted into an unaligned POS tag, making the source prototype NN DT JJ . For this work, we convert all aligned words into POS tags. As a practical concern, we insist that unaligned words are always kept as their surface form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "\u03c6 P OS (s, t, a) assign a score based on the probability of the resulting prototypes; more likely prototypes should yield higher scores. We choose:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "\u03c6 P OS (s, t, a) = p(SP, AP |T P ) \u2022 p(T P, AP |SP )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "where SP is the source prototype constructed from s, t, a. Similarly, T P and AP are the target and alignment prototypes, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "To compute \u03c6 P OS (\u2022), we must build a model for each of p(SP, AP |T P ) and p(T P, AP |SP ). To do this, we start with a corpus of aligned, POS-tagged bilingual text. We then find phrases that are consistent with (Koehn et al., 2003) . As we extract these phrase pairs, we convert each into a phrase proto-type by replacing surface forms with POS tags for all aligned words in the prototype.",
"cite_spans": [
{
"start": 214,
"end": 234,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "After we have processed the bilingual training text, we have collected a set of phrase prototypes and a count of how often each was observed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of Our Scoring Function",
"sec_num": "2.1"
},
{
"text": "To generate phrases, we scan through the source text to be translated, finding any span of source words that matches the source prototype of at least one phrase prototype. For each such phrase, and for each phrase prototype which it matches, we generate all target phrases which also match the target and alignment prototypes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating New Phrases",
"sec_num": "2.2"
},
{
"text": "To do this, we use a word-to-word dictionary to generate all target phrases which honor the alignments required by the alignment prototype. For each source word which is aligned to a POS tag in the target prototype, we substitute all single-word translations in our dictionary 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating New Phrases",
"sec_num": "2.2"
},
{
"text": "For each target phrase that we generate, we must ensure that it matches the target prototype. We give each phrase to a POS tagger, and check the resulting tags against any tags in the target prototype. If there are no mismatches, then the phrase pair is retained for the phrase table, else it is discarded. In the latter case, \u03c6 P OS (\u2022) would assign this pair a score of zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating New Phrases",
"sec_num": "2.2"
},
{
"text": "In the Moses phrase table, each entry has four parameters: two lexical weights, and the two conditional phrase probabilities p(s|t) and p(t|s). While the lexical weights can be computed using the standard method (Koehn et al., 2003) , estimating the conditional phrase probabilities is not straightforward for our approach because they are not observed in bilingual training data. Instead, we estimate the maximum conditional phrase probabilities that would be assigned by the sentence-level decoder for this phrase pair, as if it had generated the target string from the source string using the baseline phrase table 2 . To do this efficiently, we use some simplifying assumptions: we do not restrict how often a source word is used during the translation, and we ignore distortion / reordering costs. These admit a simple dynamic programming solution.",
"cite_spans": [
{
"start": 212,
"end": 232,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Phrase Weights",
"sec_num": "2.3"
},
{
"text": "We must also include the score from \u03c6 P OS (\u2022), to give the decoder some idea of our confidence in the generated phrase pair. We include the phrase pair's score as an additional weight in the phrase table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Phrase Weights",
"sec_num": "2.3"
},
{
"text": "The Linguistic Data Consortium Arabic-English corpus2 3 is used to train the baseline MT system (34K sentences, about one million words), and to learn phrase prototypes. The LDC multi-translation Arabic-English corpus (NIST2003) 4 is used for tuning and testing; the tuning set consists of the first 500 sentences, and the test set consists of the next 500 sentences. The language model is a 4-gram model built from the English side of the parallel corpus, plus the English side of the wmt07 German-English and French-English news commentary data. The baseline translation system is Moses , with the msd-bidirectional-fe reordering model. Evaluation is done using the BLEU (Papineni et al., 2001 ) metric with four references. All text is lowercased before evaluation; recasing is not used. We use the Stanford Arabic POS Tagging system, based on (Toutanova et al., 2003) 5 . The word-to-word dictionary that is used in the phrase generation step of our method is extracted from the highest-scoring translations for each source word in the baseline phrase table. For some closedclass words, we use a small, manually constructed dictionary to reduce the noise in the phrase table that exists for very common words. We use this in place of a stand-alone dictionary to reduce the need for additional resources.",
"cite_spans": [
{
"start": 673,
"end": 695,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF11"
},
{
"start": 847,
"end": 871,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "To see the effect on the BLEU score of the resulting sentence-level translation, we vary the amount of bilingual data used to build the phrase prototypes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "(approximately) no difference between building the generated phrase using the baseline phrase table, or using our generated phrase pair directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "3 Catalogue numbers LDC2004T17 and LDC2004T18 4 Catalogue number: LDC2003T18 As we increase the amount of training data, we expect that the phrase prototype extraction algorithm will observe more phrase prototypes. This will cause it to generate more phrase pairs, introducing both more noise and more good phrases into the phrase table. Because quite a few phrase prototypes are built in any case, we require that each is seen at least three times before we use it to generate phrases. Phrase prototypes seen fewer times than this are discarded before phrase generation begins. Varying this minimum support parameter does not affect the results noticeably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The results on the tuning set are seen in Figure 1 . The BLEU score on the tuning set generally improves as the amount of bilingual training data is increased, even as the percentage of generated phrases approaches 100%. Manual inspection of the phrase pairs reveals that many are badly formed; this suggests that the language model is doing its job in filtering out disfluent phrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Using the first 5,000 bilingual training sentences to train our model, we compare our method to the baseline moses system. Each system was tuned via MERT (Och, 2003) before running it on the test set. The tuned baseline system scores 38.45. Including our generated phrases improves this by 1.3 points to 39.75. This is a slightly smaller gain than exists in the tuning set experiment, due in part that we did not run MERT for experiment shown in Figure 1 .",
"cite_spans": [
{
"start": 154,
"end": 165,
"text": "(Och, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 446,
"end": 454,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "As one might expect, generated phrases both help and hurt individual translations. A sentence that can be translated starting with the phrase \"korea added that the syrian prime minister\" is translated by the baseline system as \"korean | foreign minister | added | that | the syrian\". While \"the syrian foreign minister\" is an unambiguous source phrase, the baseline phrase table does not include it; the language and reordering models must stitch the translation together. Ours method generates \"the syrian foreign minister\" directly. Generated phrases are not always correct. For example, a generated phrase causes our system to choose \"europe role\", while the baseline system picks \"the role of | europe\". While the same prototype is used (correctly) for reordering Arabic \"NN 0 JJ 1 \" constructs into English as \"NN 1 NN 0 \" in many instances, it fails in this case. The language model shares the blame, since it does not prefer the correct phrase over the shorter one. In contrast, a 5-gram language model based on the LDC Web IT 5-gram counts 6 prefers the correct phrase.",
"cite_spans": [
{
"start": 1048,
"end": 1049,
"text": "6",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We have shown that translating short spans of source text, and providing the results to a phrase-based SMT decoder can improve sentence-level machine translation. Further, it permits us to use linguistically informed features to guide the generation of new phrase pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Since we required that all unaligned target words are kept as surface forms in the target prototype, this is sufficient. If we did not insist this, then we might be faced with the unenviable task of choosing a target languange noun, without further guidance from the source text.2 If we use these probabilities for our generated phrase pair's probability estimates, then the sentence-level decoder would see",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by U.S. National Science Foundation Grant IIS-0745914. We thank the anonymous reviewers for their suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "CCG supertags in factored statistical machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the Second Workshop on SMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Birch, M. Osborne, and P. Koehn. 2007. CCG su- pertags in factored statistical machine translation. In Proc. of the Second Workshop on SMT.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Catalogue number LDC2006T13",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catalogue number LDC2006T13.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Syntaxbased language models for statistical machine translation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of MT Summit IX",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak, K. Knight, and K. Yamada. 2003. Syntax- based language models for statistical machine transla- tion. In Proceedings of MT Summit IX.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "11,001 new features for statistical machine translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2009,
"venue": "NAACL '09: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Assoc. for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Chiang, K. Knight, and W. Wang. 2009. 11,001 new features for statistical machine translation. In NAACL '09: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Assoc. for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Feature-rich translation by quasi-synchronous lattice parsing",
"authors": [
{
"first": "K",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Gimpel and N.A. Smith. 2009. Feature-rich transla- tion by quasi-synchronous lattice parsing. In Proc. of EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Are very large nbest lists useful for SMT?",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. NAACL, Short paper",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hasan, R. Zens, and H. Ney. 2007. Are very large n- best lists useful for SMT? Proc. NAACL, Short paper, pages 57-60.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Factored translation models",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "868--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn and H. Hoang. 2007. Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning (EMNLP-CoNLL), pages 868-876.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F.J. Och, and D. Marcu. 2003. Statisti- cal phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, page 54.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Annual meeting-Association for Computational Linguistics",
"volume": "45",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Annual meeting-Association for Computational Lin- guistics, volume 45, page 2.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. J. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computa- tional Linguistics, 30(4):417-449.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the 41st Annual Meeting on Assoc. for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F.J. Och. 2003. Minimum error rate training in statisti- cal machine translation. In Proc. of the 41st Annual Meeting on Assoc. for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [
"J"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the 40th Annual Meeting of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In Proc. of the 40th Annual Meeting of Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discriminative reranking for machine translation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Joint HLT and NAACL Conference (HLT 04)",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Shen, A. Sarkar, and F.J. Och. 2004. Discrimina- tive reranking for machine translation. In Proceedings of the Joint HLT and NAACL Conference (HLT 04), pages 177-184.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Toutanova, D. Klein, C. D. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In NAACL '03: Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Linguis- tics on Human Language Technology.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving a statistical mt system with automatically learned rewrite patterns",
"authors": [
{
"first": "F",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING '04: Proceedings of the 20th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Xia and M. McCord. 2004. Improving a statistical mt system with automatically learned rewrite patterns. In COLING '04: Proceedings of the 20th international conference on Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Bilingual training size vs. BLEU score (middle line, left axis) and phrase table composition (top line, right axis) on Arabic Development Set. The baseline BLEU score (bottom line) is included for comparison.",
"num": null
}
}
}
}