{
"paper_id": "P07-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:51:22.106921Z"
},
"title": "Bootstrapping Word Alignment via Word Packing",
"authors": [
{
"first": "Yanjun",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University Glasnevin",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Stroppa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University Glasnevin",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": "nstroppa@computing.dcu.ie"
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University Glasnevin",
"location": {
"settlement": "Dublin 9",
"country": "Ireland"
}
},
"email": "away@computing.dcu.ie"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a simple method to pack words for statistical word alignment. Our goal is to simplify the task of automatic word alignment by packing several consecutive words together when we believe they correspond to a single word in the opposite language. This is done using the word aligner itself, i.e. by bootstrapping on its output. We evaluate the performance of our approach on a Chinese-to-English machine translation task, and report a 12.2% relative increase in BLEU score over a state-of-the art phrasebased SMT system.",
"pdf_parse": {
"paper_id": "P07-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a simple method to pack words for statistical word alignment. Our goal is to simplify the task of automatic word alignment by packing several consecutive words together when we believe they correspond to a single word in the opposite language. This is done using the word aligner itself, i.e. by bootstrapping on its output. We evaluate the performance of our approach on a Chinese-to-English machine translation task, and report a 12.2% relative increase in BLEU score over a state-of-the art phrasebased SMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic word alignment can be defined as the problem of determining a translational correspondence at word level given a parallel corpus of aligned sentences. Most current statistical models (Brown et al., 1993; Vogel et al., 1996; Deng and Byrne, 2005) treat the aligned sentences in the corpus as sequences of tokens that are meant to be words; the goal of the alignment process is to find links between source and target words. Before applying such aligners, we thus need to segment the sentences into words -a task which can be quite hard for languages such as Chinese for which word boundaries are not orthographically marked. More importantly, however, this segmentation is often performed in a monolingual context, which makes the word alignment task more difficult since different languages may realize the same concept using varying numbers of words (see e.g. (Wu, 1997) ). Moreover, a segmentation considered to be \"good\" from a monolingual point of view may be unadapted for training alignment models.",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF2"
},
{
"start": 214,
"end": 233,
"text": "Vogel et al., 1996;",
"ref_id": "BIBREF24"
},
{
"start": 234,
"end": 255,
"text": "Deng and Byrne, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 871,
"end": 881,
"text": "(Wu, 1997)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although some statistical alignment models allow for 1-to-n word alignments for those reasons, they rarely question the monolingual tokenization and the basic unit of the alignment process remains the word. In this paper, we focus on 1-to-n alignments with the goal of simplifying the task of automatic word aligners by packing several consecutive words together when we believe they correspond to a single word in the opposite language; by identifying enough such cases, we reduce the number of 1-to-n alignments, thus making the task of word alignment both easier and more natural.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach consists of using the output from an existing statistical word aligner to obtain a set of candidates for word packing. We evaluate the reliability of these candidates, using simple metrics based on co-occurence frequencies, similar to those used in associative approaches to word alignment (Kitamura and Matsumoto, 1996; Melamed, 2000; Tiedemann, 2003) . We then modify the segmentation of the sentences in the parallel corpus according to this packing of words; these modified sentences are then given back to the word aligner, which produces new alignments. We evaluate the validity of our approach by measuring the influence of the alignment process on a Chinese-to-English Machine Translation (MT) task.",
"cite_spans": [
{
"start": 303,
"end": 333,
"text": "(Kitamura and Matsumoto, 1996;",
"ref_id": "BIBREF5"
},
{
"start": 334,
"end": 348,
"text": "Melamed, 2000;",
"ref_id": "BIBREF13"
},
{
"start": 349,
"end": 365,
"text": "Tiedemann, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. In Section 2, we study the case of 1-ton word alignment. Section 3 introduces an automatic method to pack together groups of consecutive 304 words based on the output from a word aligner. In Section 4, the experimental setting is described. In Section 5, we evaluate the influence of our method on the alignment process on a Chinese to English MT task, and experimental results are presented. Section 6 concludes the paper and gives avenues for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The same concept can be expressed in different languages using varying numbers of words; for example, a single Chinese word may surface as a compound or a collocation in English. This is frequent for languages as different as Chinese and English. To quickly (and approximately) evaluate this phenomenon, we trained the statistical IBM wordalignment model 4 (Brown et al., 1993), 1 using the GIZA++ software (Och and Ney, 2003) for the following language pairs: Chinese-English, Italian-English, and Dutch-English, using the IWSLT-2006 corpus (Takezawa et al., 2002; Paul, 2006) for the first two language pairs, and the Europarl corpus (Koehn, 2005) for the last one. These asymmetric models produce 1-to-n alignments, with n \u2265 0, in both directions. Here, it is important to mention that the segmentation of sentences is performed totally independently of the bilingual alignment process, i.e. it is done in a monolingual context. For European languages, we apply the maximum-entropy based tokenizer of OpenNLP 2 ; the Chinese sentences were human segmented (Paul, 2006) .",
"cite_spans": [
{
"start": 407,
"end": 426,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 542,
"end": 565,
"text": "(Takezawa et al., 2002;",
"ref_id": "BIBREF21"
},
{
"start": 566,
"end": 577,
"text": "Paul, 2006)",
"ref_id": "BIBREF18"
},
{
"start": 636,
"end": 649,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 1059,
"end": 1071,
"text": "(Paul, 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Case of 1-to-n Alignment",
"sec_num": "2"
},
{
"text": "In Table 1 , we report the frequencies of the different types of alignments for the various languages and directions. As expected, the number of 1: n 1 More specifically, we performed 5 iterations of Model 1, 5 iterations of HMM, 5 iterations of Model 3, and 5 iterations of Model 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Case of 1-to-n Alignment",
"sec_num": "2"
},
{
"text": "2 http://opennlp.sourceforge.net/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Case of 1-to-n Alignment",
"sec_num": "2"
},
{
"text": "alignments with n = 1 is high for Chinese-English ( 40%), and significantly higher than for the European languages. The case of 1-to-n alignments is, therefore, obviously an important issue when dealing with Chinese-English word alignment. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Case of 1-to-n Alignment",
"sec_num": "2"
},
{
"text": "Fertility-based models such as IBM models 3, 4, and 5 allow for alignments between one word and several words (1-to-n or 1: n alignments in what follows), in particular for the reasons specified above. They can be seen as extensions of the simpler IBM models 1 and 2 (Brown et al., 1993) . Similarly, Deng and Byrne (2005) propose an HMM framework capable of dealing with 1-to-n alignment, which is an extension of the original model of (Vogel et al., 1996) . However, these models rarely question the monolingual tokenization, i.e. the basic unit of the alignment process is the word. 4 One alternative to extending the expressivity of one model (and usually its complexity) is to focus on the input representation; in particular, we argue that the alignment process can benefit from a simplification of the input, which consists of trying to reduce the number of 1-to-n alignments to consider. Note that the need to consider segmentation and alignment at the same time is also mentioned in (Tiedemann, 2003) , and related issues are reported in (Wu, 1997) .",
"cite_spans": [
{
"start": 267,
"end": 287,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF2"
},
{
"start": 301,
"end": 322,
"text": "Deng and Byrne (2005)",
"ref_id": "BIBREF3"
},
{
"start": 437,
"end": 457,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF24"
},
{
"start": 992,
"end": 1009,
"text": "(Tiedemann, 2003)",
"ref_id": "BIBREF22"
},
{
"start": 1047,
"end": 1057,
"text": "(Wu, 1997)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Treatment of 1-to-n Alignments",
"sec_num": "2.1"
},
{
"text": "While in this paper, we focus on Chinese-English, the method proposed is applicable to any language pair -even for closely related languages, we expect improvements to be seen. The notation however assume Chinese-English MT. Given a Chinese sentence c J 1 consisting of J words {c 1 , . . . , c J } and an English sentence e I 1 consisting of I words {e 1 , . . . , e I }, A C\u2192E (resp. A E\u2192C ) will denote a Chinese-to-English (resp. an English-to-Chinese) word alignment between c J 1 and e I 1 . Since we are primarily interested in 1-to-n alignments, A C\u2192E can be represented as a set of pairs a j = c j , E j denoting a link between one single Chinese word c j and a few English words E j (and similarly for A E\u2192C ). The set E j is empty if the word c j is not aligned to any word in e I 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.2"
},
{
"text": "Our approach consists of packing consecutive words together when we believe they correspond to a single word in the other language. This bilingually motivated packing of words changes the basic unit of the alignment process, and simplifies the task of automatic word alignment. We thus minimize the number of 1-to-n alignments in order to obtain more comparable segmentations in the two languages. In this section, we present an automatic method that builds upon the output from an existing automatic word aligner. More specifically, we (i) use a word aligner to obtain 1-to-n alignments, (ii) extract candidates for word packing, (iii) estimate the reliability of these candidates, (iv) replace the groups of words to pack by a single token in the parallel corpus, and (v) re-iterate the alignment process using the updated corpus. The first three steps are performed in both directions, and produce two bilingual dictionaries (source-target and target-source) of groups of words to pack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Word Repacking",
"sec_num": "3"
},
{
"text": "In the following, we assume the availability of an automatic word aligner that can output alignments A C\u2192E and A E\u2192C for any sentence pair (c J 1 , e I 1 ) in a parallel corpus. We also assume that A C\u2192E and A E\u2192C contain 1: n alignments. Our method for repacking words is very simple: whenever a single word is aligned with several consecutive words, they are considered candidates for repacking. Formally, given an alignment A C\u2192E between c J 1 and e I 1 , if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
{
"text": "a j = c j , E j \u2208 A C\u2192E , with E j = {e j 1 , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
{
"text": ". . , e jm } and \u2200k \u2208 1, m \u2212 1 , j k+1 \u2212 j k = 1, then the alignment a j between c j and the sequence of words E j is considered a candidate for word repacking. The same goes for A E\u2192C . Some examples of such 1to-n alignments between Chinese and English (in both directions) we can derive automatically are displayed in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 328,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
{
"text": "\u767d\u8461\u8404\u9152: white wine \u767e\u8d27\u516c\u53f8: department store \u62b1\u6b49: excuse me \u62a5\u8b66: call the police \u676f: cup of \u5fc5\u987b: have to closest: \u6700 \u8fd1 fifteen: \u5341 \u4e94 fine: \u5f88 \u597d flight: \u6b21 \u822a\u73ed get: \u62ff \u5230 here: \u5728 \u8fd9\u91cc Figure 1 : Example of 1-to-n word alignments between Chinese and English",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Candidate Extraction",
"sec_num": "3.1"
},
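The extraction rule above (keep a 1-to-n link only when its n target words are consecutive) can be sketched in Python. This is an illustrative reconstruction, not the authors' code; the alignment representation, a list of (source index, sorted target indices) pairs, is an assumption.

```python
# Illustrative sketch, not the authors' code: extract word-packing candidates
# from 1-to-n alignments produced by an existing word aligner.

def extract_candidates(alignment, src_tokens, tgt_tokens):
    """Keep only 1-to-n links (n >= 2) whose target words are consecutive,
    i.e. j_{k+1} - j_k = 1 for all k, as required by the method."""
    candidates = []
    for j, tgt_indices in alignment:
        if len(tgt_indices) < 2:
            continue  # 1-to-0 and 1-to-1 links cannot be packed
        if all(b - a == 1 for a, b in zip(tgt_indices, tgt_indices[1:])):
            candidates.append((src_tokens[j],
                               tuple(tgt_tokens[i] for i in tgt_indices)))
    return candidates

# "杯" aligned to the consecutive pair ("cup", "of") is kept;
# the link to non-consecutive indices [4, 6] is discarded.
alignment = [(0, [0]), (1, [1, 2]), (2, [4, 6])]
src = ["必须", "杯", "好"]
tgt = ["must", "cup", "of", "x", "y", "z", "w"]
print(extract_candidates(alignment, src, tgt))  # [('杯', ('cup', 'of'))]
```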
{
"text": "Of course, the process described above is errorprone and if we want to change the input to give to the word aligner, we need to make sure that we are not making harmful modifications. 5 We thus additionally evaluate the reliability of the candidates we extract and filter them before inclusion in our bilingual dictionary. To perform this filtering, we use two simple statistical measures. In the following, a j = c j , E j denotes a candidate. The first measure we consider is co-occurrence frequency (COOC(c j , E j )), i.e. the number of times c j and E j co-occur in the bilingual corpus. This very simple measure is frequently used in associative approaches (Melamed, 1997; Tiedemann, 2003) . The second measure is the alignment confidence, defined as",
"cite_spans": [
{
"start": 184,
"end": 185,
"text": "5",
"ref_id": null
},
{
"start": 663,
"end": 678,
"text": "(Melamed, 1997;",
"ref_id": "BIBREF12"
},
{
"start": 679,
"end": 695,
"text": "Tiedemann, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Reliability Estimation",
"sec_num": "3.2"
},
{
"text": "AC(a j ) = C(a j ) COOC(c j , E j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Reliability Estimation",
"sec_num": "3.2"
},
{
"text": ",",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Reliability Estimation",
"sec_num": "3.2"
},
{
"text": "where C(a j ) denotes the number of alignments proposed by the word aligner that are identical to a j . In other words, AC(a j ) measures how often the aligner aligns c j and E j when they co-occur. We also impose that | E j | \u2264 k, where k is a fixed integer that may depend on the language pair (between 3 and 5 in practice). The rationale behind this is that it is very rare to get reliable alignment between one word and k consecutive words when k is high. The candidates are included in our bilingual dictionary if and only if their measures are above some fixed thresholds t cooc and t ac , which allow for the control of the size of the dictionary and the quality of its contents. Some other measures (including the Dice coefficient) could be considered; however, it has to be noted that we are more interested here in the filtering than in the discovery of alignment, since our method builds upon an existing aligner. Moreover, we will see that even these simple measures can lead to an improvement of the alignment process in a MT context (cf. Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Reliability Estimation",
"sec_num": "3.2"
},
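The filtering step can be sketched as follows. This is illustrative only: the count tables and the dictionary layout are assumptions, but the logic applies the two thresholds t_cooc and t_ac and the length cap k exactly as defined above, with AC(a_j) = C(a_j) / COOC(c_j, E_j).

```python
# Illustrative sketch, not the authors' code: filter packing candidates by
# co-occurrence frequency and alignment confidence.

def filter_candidates(align_counts, cooc_counts, t_cooc=20, t_ac=0.5, k=3):
    """align_counts: {(c, E): times the aligner produced exactly this link};
    cooc_counts: {(c, E): times c and E co-occur in the parallel corpus}.
    Returns the bilingual dictionary {(c, E): AC score} of groups to pack."""
    dictionary = {}
    for (c, E), c_aj in align_counts.items():
        cooc = cooc_counts.get((c, E), 0)
        if len(E) > k or cooc < t_cooc:
            continue  # too long, or too rare to trust
        ac = c_aj / cooc  # AC(a_j) = C(a_j) / COOC(c_j, E_j)
        if ac >= t_ac:
            dictionary[(c, E)] = ac
    return dictionary

aligns = {("杯", ("cup", "of")): 18, ("好", ("x", "y")): 3}
coocs = {("杯", ("cup", "of")): 25, ("好", ("x", "y")): 40}
print(filter_candidates(aligns, coocs))  # {('杯', ('cup', 'of')): 0.72}
```

The thresholds shown (t_cooc = 20, t_ac = 0.5, k = 3) are the values the paper later reports as optimal for the first iteration.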
{
"text": "Once the candidates are extracted, we repack the words in the bilingual dictionaries constructed using the method described above; this provides us with an updated training corpus, in which some word sequences have been replaced by a single token. This update is totally naive: if an entry a j = c j , E j is present in the dictionary and matches one sentence pair (c J 1 , e I 1 ) (i.e. c j and E j are respectively contained in c J",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapped Word Repacking",
"sec_num": "3.3"
},
{
"text": "1 and e I 1 ), then we replace the sequence of words E j with a single token which becomes a new lexical unit. 6 Note that this replacement occurs even if no alignment was found between c j and E j for the pair (c J 1 , e I 1 ). This is motivated by the fact that the filtering described above is quite conservative; we trust the entry a i to be correct. This update is performed in both directions. It is then possible to run the word aligner using the updated (simplified) parallel corpus, in order to get new alignments. By performing a deterministic word packing, we avoid the computation of the fertility parameters associated with fertility-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapped Word Repacking",
"sec_num": "3.3"
},
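The naive replacement step can be sketched as follows. The underscore joiner and the corpus layout are hypothetical, but the behavior matches the description: every occurrence of E_j on the target side is merged into one token whenever c_j appears in the paired source sentence, whether or not the aligner linked them in that pair.

```python
# Illustrative sketch, not the authors' code: naively repack the training
# corpus using a (source-to-target) dictionary of groups to pack.

def repack(corpus, dictionary, sep="_"):
    """corpus: list of (src_tokens, tgt_tokens) sentence pairs;
    dictionary: {(c, E): score} as built by the filtering step."""
    packed = []
    for src, tgt in corpus:
        for (c, E) in dictionary:
            if c not in src:
                continue  # entry must match the sentence pair
            n = len(E)
            out, i = [], 0
            while i < len(tgt):
                if tuple(tgt[i:i + n]) == E:
                    out.append(sep.join(E))  # new single lexical unit
                    i += n
                else:
                    out.append(tgt[i])
                    i += 1
            tgt = out
        packed.append((src, tgt))
    return packed

corpus = [(["杯"], ["a", "cup", "of", "tea"])]
print(repack(corpus, {("杯", ("cup", "of")): 0.72}))
# [(['杯'], ['a', 'cup_of', 'tea'])]
```

In the full method this update is applied in both directions, and the aligner is then re-run on the packed corpus.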
{
"text": "Word packing can be applied several times: once we have grouped some words together, they become the new basic unit to consider, and we can re-run the same method to get additional groupings. How-ever, we have not seen in practice much benefit from running it more than twice (few new candidates are extracted after two iterations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapped Word Repacking",
"sec_num": "3.3"
},
{
"text": "It is also important to note that this process is bilingually motivated and strongly depends on the language pair. For example, white wine, excuse me, call the police, and cup of (cf. Figure 1 ) translate respectively as vin blanc, excusez-moi, appellez la police, and tasse de in French. Those groupings would not be found for a language pair such as French-English, which is consistent with the fact that they are less useful for French-English than for Chinese-English in a MT perspective.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bootstrapped Word Repacking",
"sec_num": "3.3"
},
{
"text": "We wanted to compare this automatic approach to manually developed resources. For this purpose, we used a dictionary built by the MT group of Harbin Institute of Technology, as a preprocessing step to Chinese-English word alignment, and motivated by several years of Chinese-English MT practice. Some examples extracted from this resource are displayed in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 356,
"end": 364,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Manually Developed Dictionaries",
"sec_num": "3.4"
},
{
"text": "\u6709: there is \u60f3\u8981: want to \u4e0d\u5fc5: need not \u524d\u9762: in front of \u4e00: as soon as \u770b: look at Figure 2 : Examples of entries from the manually developed dictionary 4 Experimental Setting",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Manually Developed Dictionaries",
"sec_num": "3.4"
},
{
"text": "The intrinsic quality of word alignment can be assessed using the Alignment Error Rate (AER) metric (Och and Ney, 2003) , that compares a system's alignment output to a set of gold-standard alignment. While this method gives a direct evaluation of the quality of word alignment, it is faced with several limitations. First, it is really difficult to build a reliable and objective gold-standard set, especially for languages as different as Chinese and English. Second, an increase in AER does not necessarily imply an improvement in translation quality (Liang et al., 2006) and vice-versa (Vilar et al., 2006) . The relationship between word alignments and their impact on MT is also investigated in (Ayan and Dorr, 2006; Lopez and Resnik, 2006; Fraser and Marcu, 2006) . Consequently, we chose to extrinsically evaluate the performance of our approach via the translation task, i.e. we measure the influence of the alignment process on the final translation output. The quality of the translation output is evaluated using BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 554,
"end": 574,
"text": "(Liang et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 590,
"end": 610,
"text": "(Vilar et al., 2006)",
"ref_id": "BIBREF23"
},
{
"start": 701,
"end": 722,
"text": "(Ayan and Dorr, 2006;",
"ref_id": "BIBREF0"
},
{
"start": 723,
"end": 746,
"text": "Lopez and Resnik, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 747,
"end": 770,
"text": "Fraser and Marcu, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 1030,
"end": 1053,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.1"
},
{
"text": "The experiments were carried out using the Chinese-English datasets provided within the IWSLT 2006 evaluation campaign (Paul, 2006) , extracted from the Basic Travel Expression Corpus (BTEC) (Takezawa et al., 2002) . This multilingual speech corpus contains sentences similar to those that are usually found in phrase-books for tourists going abroad. Training was performed using the default training set, to which we added the sets de-vset1, devset2, and devset3. 7 The English side of the test set was not available at the time we conducted our experiments, so we split the development set (devset 4) into two parts: one was kept for testing (200 aligned sentences) with the rest (289 aligned sentences) used for development purposes.",
"cite_spans": [
{
"start": 119,
"end": 131,
"text": "(Paul, 2006)",
"ref_id": "BIBREF18"
},
{
"start": 191,
"end": 214,
"text": "(Takezawa et al., 2002)",
"ref_id": "BIBREF21"
},
{
"start": 433,
"end": 466,
"text": "de-vset1, devset2, and devset3. 7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.2"
},
{
"text": "As a pre-processing step, the English sentences were tokenized using the maximum-entropy based tokenizer of the OpenNLP toolkit, and case information was removed. For Chinese, the data provided were tokenized according to the output format of ASR systems, and human-corrected (Paul, 2006) . Since segmentations are human-corrected, we are sure that they are good from a monolingual point of view. Table 2 contains the various corpus statistics.",
"cite_spans": [
{
"start": 276,
"end": 288,
"text": "(Paul, 2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.2"
},
{
"text": "We use a standard log-linear phrase-based statistical machine translation system as a baseline: GIZA++ implementation of IBM word alignment model 4 (Brown et al., 1993; Och and Ney, 2003) , 8 the refinement and phrase-extraction heuristics described in (Koehn et al., 2003) (Och, 2003) using Phramer (Olteanu et al., 2006) , a 3-gram language model with Kneser-Ney smoothing trained with SRILM (Stolcke, 2002) on the English side of the training data and Pharaoh (Koehn, 2004) with default settings to decode. The log-linear model is also based on standard features: conditional probabilities and lexical smoothing of phrases in both directions, and phrase penalty .",
"cite_spans": [
{
"start": 148,
"end": 168,
"text": "(Brown et al., 1993;",
"ref_id": "BIBREF2"
},
{
"start": 169,
"end": 187,
"text": "Och and Ney, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 273,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF6"
},
{
"start": 274,
"end": 285,
"text": "(Och, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 300,
"end": 322,
"text": "(Olteanu et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 394,
"end": 409,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF20"
},
{
"start": 463,
"end": 476,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "4.3"
},
{
"text": "The initial word alignments are obtained using the baseline configuration described above. From these, we build two bilingual 1-to-n dictionaries (one for each direction), and the training corpus is updated by repacking the words in the dictionaries, using the method presented in Section 2. As previously mentioned, this process can be repeated several times; at each step, we can also choose to exploit only one of the two available dictionaries, if so desired. We then extract aligned phrases using the same procedure as for the baseline system; the only difference is the basic unit we are considering. Once the phrases are extracted, we perform the estimation of the features of the log-linear model and unpack the grouped words to recover the initial words. Finally, minimum-errorrate training and decoding are performed. The various parameters of the method (k, t cooc , t ac , cf. Section 2) have been optimized on the development set. We found out that it was enough to perform two iterations of repacking: the optimal set of values was found to be k = 3, t ac = 0.5, t cooc = 20 for the first iteration, and t cooc = 10 for the second BLEU[%] Baseline 15.14 n=1. with C-E dict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "15.92 n=1. with E-C dict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "15.77 n=1. with both 16.59 n=2. with C-E dict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "16.99 n=2. with E-C dict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "16.59 n=2. with both 16.88 Table 3 : Influence of word repacking on Chinese-to-English MT iteration, for both directions. 9 In Table 3 , we report the results obtained on the test set, where n denotes the iteration. We first considered the inclusion of only the Chinese-English dictionary, then only the English-Chinese dictionary, and then both. After the first step, we can already see an improvement over the baseline when considering one of the two dictionaries. When using both, we observe an increase of 1.45 BLEU points, which corresponds to a 9.6% relative increase. Moreover, we can gain from performing another step. However, the inclusion of the English-Chinese dictionary is harmful in this case, probably because 1-to-n alignments are less frequent for this direction, and have been captured during the first step. By including the Chinese-English dictionary only, we can achieve an increase of 1.85 absolute BLEU points (12.2% relative) over the initial baseline. 10",
"cite_spans": [
{
"start": 122,
"end": 123,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 3",
"ref_id": null
},
{
"start": 127,
"end": 134,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "Quality of the Dictionaries To assess the quality of the extraction procedure, we simply manually evaluated the ratio of incorrect entries in the dictionaries. After one step of word packing, the Chinese-English and the English-Chinese dictionaries respectively contain 7.4% and 13.5% incorrect entries. After two steps of packing, they only contain 5.9% and 10.3% incorrect entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.1"
},
{
"text": "Intuitively, the word alignments obtained after word packing are more likely to be 1-to-1 than before. In-deed, the word sequences in one language that usually align to one single word in the other language have been grouped together to form one single token. Table 4 shows the detail of the distribution of alignment types after one and two steps of automatic repacking. In particular, we can observe that the 1: 1 1: 0 1: 1 1: 2 1: 3 1: Table 4 : Distribution of alignment types (%)",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 267,
"text": "Table 4",
"ref_id": null
},
{
"start": 439,
"end": 446,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Alignment Types",
"sec_num": "5.2"
},
{
"text": "n (n > 3) C-E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Types",
"sec_num": "5.2"
},
{
"text": "alignments are more frequent after the application of repacking: the ratio of this type of alignment has increased by 7.81% for Chinese-English and 5.26% for English-Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Types",
"sec_num": "5.2"
},
{
"text": "To test the influence of the initial word segmentation on the process of word packing, we considered an additional segmentation configuration, based on an automatic segmenter combining rule-based and statistical techniques (Zhao et al., 2001 The results obtained are displayed in Table 5 . As expected, the automatic segmenter leads to slightly lower results than the human-corrected segmentation. However, the proposed method seems to be beneficial irrespective of the choice of segmentation. Indeed, we can also observe an improvement in the new setting: 2.6 points absolute increase in BLEU (17.4% relative). 11",
"cite_spans": [
{
"start": 223,
"end": 241,
"text": "(Zhao et al., 2001",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Influence of Word Segmentation",
"sec_num": "5.3"
},
{
"text": "We also compared our technique for automatic packing of words with the exploitation of manually developed resources. More specifically, we used a 1-to-n Chinese-English bilingual dictionary, described in Section 3.4, and used it in place of the automatically acquired dictionary. Words are thus grouped according to this dictionary, and we then apply the same word aligner as for previous experiments. In this case, since we are not bootstrapping from the output of a word aligner, this can actually be seen as a pre-processing step prior to alignment. These resources follow more or less the same format as the output of the word segmenter mentioned in Section 5.1.2 (Zhao et al., 2001) , so the experiments are carried out using this segmentation.",
"cite_spans": [
{
"start": 668,
"end": 687,
"text": "(Zhao et al., 2001)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Manually Developed Resources",
"sec_num": "5.4"
},
{
"text": "14.91 Automatic word packing 17.51 Packing with \"manual\" dictionary 16.15 Table 6 : Exploiting manually developed resources",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "BLEU[%] Baseline",
"sec_num": null
},
{
"text": "The results obtained are displayed in Table 6 .We can observe that the use of the manually developed dictionary provides us with an improvement in translation quality: 1.24 BLEU points absolute (8.3% relative). However, there does not seem to be a clear gain when compared with the automatic method. Even if those manual resources were extended, we do not believe the improvement is sufficient enough to justify this additional effort.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "BLEU[%] Baseline",
"sec_num": null
},
{
"text": "In this paper, we have introduced a simple yet effective method to pack words together in order to give a different and simplified input to automatic word aligners. We use a bootstrap approach in which we first extract 1-to-n word alignments using an existing word aligner, and then estimate the confidence of those alignments to decide whether or not the n words have to be grouped; if so, this group is conwould thus be completely driven by the bilingual alignment process (see also (Wu, 1997; Tiedemann, 2003) for related considerations). In this case, our approach would be similar to the approach of (Xu et al., 2004) , except for the estimation of candidates. sidered a new basic unit to consider. We can finally re-apply the word aligner to the updated sentences.",
"cite_spans": [
{
"start": 485,
"end": 495,
"text": "(Wu, 1997;",
"ref_id": "BIBREF25"
},
{
"start": 496,
"end": 512,
"text": "Tiedemann, 2003)",
"ref_id": "BIBREF22"
},
{
"start": 605,
"end": 622,
"text": "(Xu et al., 2004)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We have evaluated the performance of our approach by measuring the influence of this process on a Chinese-to-English MT task, based on the IWSLT 2006 evaluation campaign. We report a 12.2% relative increase in BLEU score over a standard phrase-based SMT system. We have verified that this process actually reduces the number of 1: n alignments with n = 1, and that it is rather independent from the (Chinese) segmentation strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "As for future work, we first plan to consider different confidence measures for the filtering of the alignment candidates. We also want to bootstrap on different word aligners; in particular, one possibility is to use the flexible HMM word-to-phrase model of Deng and Byrne (2005) in place of IBM model 4. Finally, we would like to apply this method to other corpora and language pairs.",
"cite_spans": [
{
"start": 259,
"end": 280,
"text": "Deng and Byrne (2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Note that a 1: 0 alignment may denote a failure to capture a 1: n alignment with n > 1.4 Interestingly, this is actually even the case for approaches that directly model alignments between phrases(Marcu and Wong, 2002;Birch et al., 2006).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Consequently, if we compare our approach to the problem of collocation identification, we may say that we are more interested in precision than recall(Smadja et al., 1996). However, note that our goal is not recognizing specific sequences of words such as compounds or collocations; it is making (bilingually motivated) changes that simplify the alignment process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In case of overlap between several groups of words to replace, we select the one with highest confidence (according to tac).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More specifically, we choose the first English reference from the 7 references and the Chinese sentence to construct new sentence pairs.8 Training is performed using the same number of iterations as in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The parameters k, tac, and tcooc are optimized for each step, and the alignment obtained using the best set of parameters for a given step are used as input for the following step.10 Note that this setting (using both dictionaries for the first step and only the Chinese dictionary for the second step) is also the best setting on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We could actually consider an extreme case, which would consist of splitting the sentences into characters, i.e. each character would be blindly treated as one word. The segmentation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by Science Foundation Ireland (grant number OS/IN/1732). Prof. Tiejun Zhao and Dr. Muyun Yang from the MT group of Harbin Institute of Technology, and Yajuan Lv from the Institute of Computing Technology, Chinese Academy of Sciences, are kindly acknowledged for providing us with the Chinese segmenter and the manually developed bilingual dictionary used in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Going beyond aer: An extensive analysis of word alignments and their impact on mt",
"authors": [
{
"first": "Necip Fazil",
"middle": [],
"last": "Ayan",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL 2006",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Necip Fazil Ayan and Bonnie J. Dorr. 2006. Going be- yond aer: An extensive analysis of word alignments and their impact on mt. In Proceedings of COLING- ACL 2006, pages 9-16, Sydney, Australia.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Constraining the phrase-based, joint probability statistical translation model",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AMTA 2006",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Birch, Chris Callison-Burch, and Miles Os- borne. 2006. Constraining the phrase-based, joint probability statistical translation model. In Proceed- ings of AMTA 2006, pages 10-18, Boston, MA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "HMM word and phrase alignment for statistical machine translation",
"authors": [
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT-EMNLP 2005",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonggang Deng and William Byrne. 2005. HMM word and phrase alignment for statistical machine transla- tion. In Proceedings of HLT-EMNLP 2005, pages 169-176, Vancouver, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Measuring word alignment quality for statistical machine translation",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Fraser and Daniel Marcu. 2006. Measuring word alignment quality for statistical machine transla- tion. Technical Report ISI-TR-616, ISI/University of Southern California.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic extraction of word sequence correspondences in parallel corpora",
"authors": [
{
"first": "Mihoko",
"middle": [],
"last": "Kitamura",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 4th Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "79--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihoko Kitamura and Yuji Matsumoto. 1996. Auto- matic extraction of word sequence correspondences in parallel corpora. In Proceedings of the 4th Workshop on Very Large Corpora, pages 79-87, Copenhagen, Denmark.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Koehn, Franz Och, and Daniel Marcu. 2003. Sta- tistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 48-54, Edmonton, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pharaoh: A beam search decoder for phrase-based statistical machine translation models",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AMTA 2004",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation mod- els. In Proceedings of AMTA 2004, pages 115-124, Washington, District of Columbia.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Translation Summit X",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Machine Transla- tion Summit X, pages 79-86, Phuket, Thailand.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Alignment by agreement",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL 2006",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006. Align- ment by agreement. In Proceedings of HLT-NAACL 2006, pages 104-111, New York, NY.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word-based alignment, phrase-based translation: What's the link?",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AMTA 2006",
"volume": "",
"issue": "",
"pages": "90--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lopez and Philip Resnik. 2006. Word-based alignment, phrase-based translation: What's the link? In Proceedings of AMTA 2006, pages 90-99, Cam- bridge, MA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A phrase-based, joint probability model for statistical machine translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP 2002",
"volume": "",
"issue": "",
"pages": "133--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine transla- tion. In Proceedings of EMNLP 2002, pages 133-139, Morristown, NJ.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic discovery of noncompositional compounds in parallel data",
"authors": [
{
"first": "I. Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of EMNLP 1997",
"volume": "",
"issue": "",
"pages": "97--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 1997. Automatic discovery of non- compositional compounds in parallel data. In Pro- ceedings of EMNLP 1997, pages 97-108, Somerset, New Jersey.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "I. Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 2000. Models of translational equiv- alence among words. Computational Linguistics, 26(2):221-249.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och and Hermann Ney. 2003. A systematic com- parison of various statistical alignment models. Com- putational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL 2003",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och. 2003. Minimum error rate training in statisti- cal machine translation. In Proceedings of ACL 2003, pages 160-167, Sapporo, Japan.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Phramer -an open source statistical phrase-based translator",
"authors": [
{
"first": "Marian",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Ionut",
"middle": [],
"last": "Volosen",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the NAACL 2006 Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "146--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marian Olteanu, Chris Davis, Ionut Volosen, and Dan Moldovan. 2006. Phramer -an open source statis- tical phrase-based translator. In Proceedings of the NAACL 2006 Workshop on Statistical Machine Trans- lation, pages 146-149, New York, NY.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL 2002",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proceedings of ACL 2002, pages 311-318, Philadelphia, PA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Overview of the IWSLT 2006 Evaluation Campaign",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of IWSLT 2006",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Paul. 2006. Overview of the IWSLT 2006 Eval- uation Campaign. In Proceedings of IWSLT 2006, pages 1-15, Kyoto, Japan.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Translating collocations for bilingual lexicons: A statistical approach",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Smadja",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Smadja, Kathleen R. McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computa- tional Linguistics, 22(1):1-38.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SRILM -An extensible language modeling toolkit",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Stolcke. 2002. SRILM -An extensible lan- guage modeling toolkit. In Proceedings of the Inter- national Conference on Spoken Language Processing, pages 901-904, Denver, Colorado.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of LREC 2002",
"volume": "",
"issue": "",
"pages": "147--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a broad-coverage bilin- gual corpus for speech translation of travel conversa- tions in the real world. In Proceedings of LREC 2002, pages 147-152, Las Palmas, Spain.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Combining clues for word alignment",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EACL 2003",
"volume": "",
"issue": "",
"pages": "339--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2003. Combining clues for word align- ment. In Proceedings of EACL 2003, pages 339-346, Budapest, Hungary.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "AER: Do we need to \"improve\" our alignments?",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Popovic",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of IWSLT 2006",
"volume": "",
"issue": "",
"pages": "205--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vilar, Maja Popovic, and Hermann Ney. 2006. AER: Do we need to \"improve\" our alignments? In Proceedings of IWSLT 2006, pages 205-212, Kyoto, Japan.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "HMM-based word alignment in statistical translation",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING 1996",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical trans- lation. In Proceedings of COLING 1996, pages 836- 841, Copenhagen, Denmark.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Do we need chinese word segmentation for statistical machine translation?",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Third SIGHAN Workshop on Chinese Language Learning",
"volume": "",
"issue": "",
"pages": "122--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Xu, Richard Zens, and Hermann Ney. 2004. Do we need chinese word segmentation for statistical machine translation? In Proceedings of the Third SIGHAN Workshop on Chinese Language Learning, pages 122-128, Barcelona, Spain.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improvements in phrase-based statistical machine translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Zens and Hermann Ney. 2004. Improvements in phrase-based statistical machine translation. In Proceedings of HLT-NAACL 2004, pages 257-264, Boston, MA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Increasing accuracy of chinese segmentation with strategy of multi-step processing",
"authors": [
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "L\u00fc",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Chinese Information Processing",
"volume": "15",
"issue": "1",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiejun Zhao, Yajuan L\u00fc, and Hao Yu. 2001. Increas- ing accuracy of chinese segmentation with strategy of multi-step processing. Journal of Chinese Information Processing, 15(1):13-18.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "Chinese-English corpus statistics",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF5": {
"text": "Influence of Chinese segmentation",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}