{
"paper_id": "P03-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:14:15.671785Z"
},
"title": "Effective Phrase Translation Extraction from Alignment Models",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": "",
"affiliation": {},
"email": "ashishv@cs.cmu.edu"
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": "",
"affiliation": {},
"email": "vogel@cs.cmu.edu"
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Phrase level translation models are effective in improving translation quality by addressing the problem of local reordering across language boundaries. Methods that attempt to fundamentally modify the traditional IBM translation model to incorporate phrases typically do so at a prohibitive computational cost. We present a technique that begins with improved IBM models to create phrase level knowledge sources that effectively represent local as well as global phrasal context. Our method is robust to noisy alignments at both the sentence and corpus level, delivering high quality phrase level translation pairs that contribute to significant improvements in translation quality (as measured by the BLEU metric) over word based lexica as well as a competing alignment based method.",
"pdf_parse": {
"paper_id": "P03-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "Phrase level translation models are effective in improving translation quality by addressing the problem of local reordering across language boundaries. Methods that attempt to fundamentally modify the traditional IBM translation model to incorporate phrases typically do so at a prohibitive computational cost. We present a technique that begins with improved IBM models to create phrase level knowledge sources that effectively represent local as well as global phrasal context. Our method is robust to noisy alignments at both the sentence and corpus level, delivering high quality phrase level translation pairs that contribute to significant improvements in translation quality (as measured by the BLEU metric) over word based lexica as well as a competing alignment based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Statistical Machine Translation defines the task of translating a source language sentence where the search component is commonly referred to as the decoding step (Wang and Waibel, 1998) . Within the generative model, the Bayes reformulation is used to estimate , at the cost of deviating from the Bayesian framework. Regardless of the approach, the question of accurately estimating a model of translation from a large parallel or comparable corpus is one of the defining components within statistical machine translation.",
"cite_spans": [
{
"start": 163,
"end": 186,
"text": "(Wang and Waibel, 1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Re-ordering effects across languages have been modeled in several ways, including word-based (Brown et al., 1993) , template-based and syntax-based (Yamada, Knight, 2001) . Analyzing these models from a generative mindset, they all assume that the atomic unit of lexical content is the word, and re-ordering effects are applied above that level. (Marcu, Wong, 2002) illustrate the effects of assuming that lexical correspondence can only be modeled at the word level, and motivate a joint probability model that explicitly generates phrase level lexical content across both languages. (Wu, 1995) presents a bracketing method that models re-ordering at the sentence level. Both (Marcu, Wong, 2002; Wu, 1995) model the reordering phenomenon effectively, but at significant computational expense, and tend to be difficult to scale to long sentences. Reasons to introduce phrase level translation knowledge sources have been ade-quately shown and confirmed by (Och, Ney, 2000) , and we focus on methods to build these sources from existing, mature components within the translation process.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF0"
},
{
"start": 148,
"end": 170,
"text": "(Yamada, Knight, 2001)",
"ref_id": "BIBREF11"
},
{
"start": 346,
"end": 365,
"text": "(Marcu, Wong, 2002)",
"ref_id": "BIBREF2"
},
{
"start": 585,
"end": 595,
"text": "(Wu, 1995)",
"ref_id": "BIBREF10"
},
{
"start": 677,
"end": 696,
"text": "(Marcu, Wong, 2002;",
"ref_id": "BIBREF2"
},
{
"start": 697,
"end": 706,
"text": "Wu, 1995)",
"ref_id": "BIBREF10"
},
{
"start": 956,
"end": 972,
"text": "(Och, Ney, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper presents a method of phrase extraction from alignment data generated by IBM Models. By working directly from alignment data with appropriate measures taken to extract accurate translation pairs, we try to avoid the computational complexity that can result from methods that try to create globally consistent alignment model phrase segmentations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first describe the information available within alignment data, and go on to describe a method for extracting high quality phrase translation pairs from such data. We then discuss the implications of adding phrasal translation pairs to the decoding process, and present evaluation results that show significant improvements when applying the described extraction technique. We end with a discussion of strengths and weaknesses of this method and the potential for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alignment models associate words and their translations at the sentence level creating a translation lexicon across the language pair. For each sentence pair, the model also presents the maximally likely association between each source and target word across the sentence pair, forming an alignment map for each sentence pair in the training corpus. The most likely alignment pattern between a source and target sentence under the trained alignment model will be referred to as the maximum approximation, which under HMM alignment (Vogel et al., 1996) model corresponds to the Viterbi path. A set of words in the source sentence associated with a set of words in the target sentence is considered a phrasal pair and forms a partition within the alignment map. . shows a source and target sentence pair with points indicating alignment points.",
"cite_spans": [
{
"start": 531,
"end": 551,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "A phrasal translation pair within a sentence pair can be represented as the 4-tuple hypothesis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "A C B D 9 E 9 F G 5 E I H \u00a6 E 9 F Q P R representing an index D 9 E I H $ and length S F T G 5 E 9 F Q P U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "within the source and the target sentence pair 1 , respectively. The phrasal extraction task involves selecting phrasal hypotheses based on the alignment model (both the translation lexicon as well as the maximal approximation). The maximal approximation captures context at the sentence level, while the lexicon provides a corpus level translation estimate, motivating the alignment model as a starting point for phrasal extraction. The extraction technique must be able to handle alignments that are only partially correct, as well as cases where the sentence pairs have been incorrectly matched as parallel translations within the corpus. Accommodating for the noisy corpus is an increasingly important component of the translation process, especially when considering languages where no manually aligned parallel corpus is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Building a phrasal lexicon involves Generation, Scoring, and Pruning steps, corresponding to generating a set of candidate translation pairs, scoring them based on the translation model, and pruning them to account for noise within the data as well as the extraction process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "The generation step refers to the process of identifying source phrases that require translations and then extracting translations from the alignment model data. We begin by identifying all source language ngrams upto some`within the training corpus. When the test sentences that require translation are known, we can simply extract those n-grams that appear in the test sentences. For each of these n-grams, we create a set of candidate translations extracted from the corpus. The primary motivation to restrict the identification step to the test sentence n-grams is savings in computational expense, and the result is a phrasal translation source that extracts translation pairs limited to the test sentences. For each source language n-gram within the pool, we have to find a set of candidate translations. The generation task is formally defined as finding",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "! A b a in Equation (1) ! A a \u00a1 \u00a6 c d X e A C B D 9 E 9 F T G E I H f E 9 F g P h p i ! A E 1 q W \u00a7 \u00a9 \u00a7 \u00a9 \u00a7 1 W Q r s u t \u00a3 %",
"eq_num": "(1)"
}
],
"section": "Generation",
"sec_num": "3"
},
{
"text": "where % is the source n-gram for which we are extracting translations, ! A is the set of all partitions, and . We extract these candidates from the alignment map by examining each sentence pair where the source n-gram occurs, and extracting all possible target phrase translations using a sliding window approach. We extract candidate translations of phrase length",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "1 v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "@ to x , starting at offset y to x @",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": ". Figure 1 . shows circular boxes indicating each potential partition region. One particular partition is indicated by the shading.",
"cite_spans": [],
"ref_spans": [
{
"start": 2,
"end": 10,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "Over all occurrences of the n-gram within the sentences as well as across sentences, a sizeable candidate pool is generated that attempts the cover the translated usage of the source n-gram % within the corpus. This set is large, and contains several spurious translations, and does not consider other source side n-grams within each sentence. The deliberate choice to avoid creating a consistent partitioning of the sentence pairs across n-grams reflects the ability to model partially correct alignments within sentences. This sliding window can be restricted to exclude word-word translations, ie",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "F T G \u00a3 @ , F Q P \u00a3 @",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "if other sources are available that are known to be more accurate. Now that the candidate pool has been generated, it needs to be scored and pruned to reflect relative confidence between candidate translations and to remove spurious translations due to the sliding window approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "3"
},
{
"text": "The candidate translations for the source n-gram now need to be scored and ranked according to some measure of confidence. Each candidate translation pair defines a partition within the sentence map, and this partitioning can be scored for confidence in translation quality. We estimate translation confidence by measures from three models; the estimation from the maximum approximation (alignment map), estimation from the word based translation lexicon, and language specific measures. Each of the scoring methods discussed below contributes to the final score under 2D \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "4"
},
{
"text": "F V & # \u00a6 w i ! A a \u00a3 W V # \u00a6 5 W w i ! A a U q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "4"
},
{
"text": "(2) where W e d W = @ and w refers to a translation hypothesis for a given source n-gram",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "4"
},
{
"text": ". From now on we will refer to a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "%",
"sec_num": null
},
{
"text": "V & # \u00a6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "%",
"sec_num": null
},
{
"text": "with regard to a particular % implicitly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "%",
"sec_num": null
},
{
"text": "We define two kinds of scores, within sentence consistency and across sentence consistency from the alignment map, in order to represent local and global context effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Map",
"sec_num": "4.1"
},
{
"text": "The partition defined by each candidate translation pair imposes constraints over the maximum approximation hypothesis for sentences in which it occurs. We evaluate the partition by examining its consistency with the maximum approximation hypothesis by considering the alignment hypothesis points within the sentence. An alignment point",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "f B 0 E g h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "(source, target) is said to be consistent if it occurs within the partition defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "A B D 9 E 9 F G E I H \u00a6 E 9 F P . f j i 4 kl",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "is considered inconsistent in two cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "D m 0 m D X n o F T G and v g q p p H or g s r p H t n o F Q P I (3) H u m g q m v H w n 2 F g P and 0 p x D or 0 r o D X n 2 F G (4) Each A C B D 9 E 9 F T G E I H f E 9 F g P R in ! A a 1 ( D s \u00a7 \u00a9 \u00a7 \u00a9 \u00a7 y D + F G defines %",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": ") determines a set of consistent and inconsistent points. Figure 1 . shows inconsistent points with respect to the shaded partition by drawing an X over the alignment point. The within sentence consistency scoring metric is defined in Equation 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V & # \u00a6 G A C B D 9 E 9 F G 5 E I H \u00a6 E 9 F Q P R e \u00a3 z & \u00a1 z D & \u00a1 y n z \u00a1",
"eq_num": "(5)"
}
],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "This measure represents consistency of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "A C B D 9 E 9 F G 5 E I H \u00a6 E 9 F Q P R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "within the maximal approximation alignment for sentence pair 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Within Sentence",
"sec_num": "4.2"
},
{
"text": "Several hypothesis within ! A a 1 are similar or identical to those in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": "! A a S { \u00a6 where 1 \u00a3 | {",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": ". We want to score hypothesis that are consistent across sentences higher than those that occur rarely, as the former are assumed to be the correct translations in context. We want to account for different contexts across sentences; therefore we want to highlight similar translations, not simply exact matches. We use a word level Levenstein distance to compare the target side hypotheses within",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": "! A a . Each element w within ! A a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": "(the complete candidate translation list for % ) is assigned the average Levenstein distance with all other elements as its across sentence consistence score; effectively performing a single pass average link clustering to identify the correct translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V & # \u00a6 5 } G w \u00a3 @ \u00a5 4 4 e w E q w",
"eq_num": "(6)"
}
],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": "where e calculates the Levenshein distance between the target phrases within two hypothesis w and w , is the number of elements in ! A a .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Across Sentence",
"sec_num": "4.3"
},
{
"text": "V & # f } G",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The higher the",
"sec_num": null
},
{
"text": ", the more likely the hypothesis pair is a correct translation. The clustering approach accounts for noise due to incorrect sentence alignment, as well as the different contexts in which a particular source n-gram can be used. As predicted by the formulation of this method, preference is given towards shorter target translations. This effect can be countered by introducing a phrase length model to approximate the difference in phrases lengths across the language boundary. This will be discussed further as a language specific scoring method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The higher the",
"sec_num": null
},
{
"text": "The methods presented above used the maximum approximation to score candidate translation hypotheses. The translation lexicon generated by the IBM models provides translation estimates at the word level built on the complete training corpus. These corpus level estimates can be integrated into our scoring paradigm to balance the sentence level estimates from the alignment map methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alignment Lexicon",
"sec_num": "4.4"
},
{
"text": "1 \u00a2 \u00a1 i 3 l for each f B 0 E g h ( \u00a1 i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The translation lexicon provides a conditional probability estimate",
"sec_num": null
},
{
"text": "refers to the word at position 0 in sentence 1 ) within the maximum approximation. Depending on the direction in which the traditional IBM models are trained, we can either condition on the source or target side, while joint probability models can give us a bidirectional estimate. These translation probability estimates are used to weight the . So far we have only considered the points within the partition where alignment points are predicted by the maximal approximation. The translation lexicon provides estimates at the word level, so we can construct a scoring measure for the complete region within",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The translation lexicon provides a conditional probability estimate",
"sec_num": null
},
{
"text": "A C B D 9 E 9 F G 5 E I H \u00a6 E 9 F Q P R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The translation lexicon provides a conditional probability estimate",
"sec_num": null
},
{
"text": "that models the complete probability of the partition. The lexical scoring equation below models this effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The translation lexicon provides a conditional probability estimate",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V & # \u00a6 s i A C B D 9 E 9 F G 5 E I H \u00a6 E 9 F Q P R e \u00a3 W i v s u t l v s 1 \u00a2 \u00a1 i 3 l",
"eq_num": "(7)"
}
],
"section": "The translation lexicon provides a conditional probability estimate",
"sec_num": null
},
{
"text": "This method prefers longer target side phrases due to the sum over the target words within the partition. Although it would also prefer short source side phrases, we are only concerned with comparing hypothesis partitions for a given source n-gram % .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The translation lexicon provides a conditional probability estimate",
"sec_num": null
},
{
"text": "The nature of the phrasal association between languages varies depending on the level of inflexion, morphology as well as other factors. The predominant language specific correction to the scoring techniques discussed above models differences in phrase lengths across languages. For example, when comparing English and Chinese translations, we see that on average, the English sentence is approximately 1.3 times longer (under our current segmentation in the small data track). To model these language specific effects, we introduce a phrase length scoring component that is based on the ratio of sentence length between languages. We build a sentence length model based on the DiffRatio statistic defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Specific",
"sec_num": "4.5"
},
{
"text": "D U 7 \" R D \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Specific",
"sec_num": "4.5"
},
{
"text": "where I is the source sentence length and J is the target sentence length. Let be the average D U 7 \" R D over the sentences in the corpus, and 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Specific",
"sec_num": "4.5"
},
{
"text": "be the variance; thereby defining a normal distribution over the DiffRatio statistic. Using the standard Z normalization technique under a normal distribution parameterized by E , we can estimate the probability that a new DiffRatio calculated on the phrasal pair can be generated by the model, giving us the scoring estimate below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Specific",
"sec_num": "4.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V & # f s U A C B D E 9 F T G E I H f E 9 F Q P h \u00a3 \u00a1 S F G \u00a9 E 9 F Q P 3 \u00a2 E \u00a3",
"eq_num": "(8)"
}
],
"section": "Language Specific",
"sec_num": "4.5"
},
{
"text": "To improve the model we might consider examining known phrase translation pairs if this data is available. We explore the language specific difference further by noting that English phrases contain several function words that typically align to the empty Chinese word. We accounted for this effect within the scoring process by treating all target language (English) phrases that only differed by the function words on the phrase boundary as the same translation. The burden of selecting the appropriate hypothesis within the decoding process is moved towards the language model under this corrective strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Specific",
"sec_num": "4.5"
},
{
"text": "The list of candidate translations for each source ngram % is large, and must be pruned to select the most likely set of translations. This pruning is required to ensure that the decoding process remains computationally tractable. Simple threshold methods that rank hypotheses by their final score and only save the top hypotheses will not work here, since phrases differ in the number of possible correct translations they could have when used in different contexts. Given the score ordered set of candidate phrases ! A a , we would like to label some subset as incorrect translations and remove them from the set. We approach this task as a density estimation problem where we need to separate the distribution of the incorrectly translated hypothesis from the distribution of the likely translations. Instead of using the maximum likelihood criteria, we use the maximal separation criteria ie. selecting a splitting point within the scores to maximize the difference of the mean score between distributions as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 \u00ef w 3 \u00a1 w \u00a9 % \u00a3 D \" F V & # f w & \u00ab \u00aa \u00ac D \" F V & # f w",
"eq_num": "(10)"
}
],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "(10) calculates direct translation probabilities, ie",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "1 5 3 \u00a1 &",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": ". As mentioned earlier, (Och and Ney, 2002) , show that using direction translation estimates in the decoding process as compared with calculating 1 \u00a2 \u00a1 8 3",
"cite_spans": [
{
"start": 24,
"end": 43,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "as prescribed by the Bayesian framework does not reduce translation quality. Our results corroborate these findings and we use (10) as the phrase level translation model estimate within our decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pruning",
"sec_num": "5"
},
{
"text": "Phrase translation pairs that are generated by the method described in this paper are finally scored with estimates of translation probability, which can be conditioned on the target language if necessary. These estimates fit cleanly into the decoding process, except for the issue of phrase length. Traditional word lexicons propose translations for one source word, while with phrase translations, a single hypothesis pair can span several words in the source or target language. Comparing between a path that uses a phrase compared to one that uses multiple words (even if the constituent words are the same) is difficult. The word level pathway involves the product of several probabilities, whereas the phrasal path is represented by one probability score. Potential solutions are to introduce translation length models or to learn scaling factors for phrases of different lengths. Results in this paper have been generated by empirically determining a scaling factor that was inversely proportional to the lenth of the phrase, causing each translation to have a score comparable to the product of the word to word translations within the phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integration",
"sec_num": "6"
},
{
"text": "In order to compare our method to a well understood phrase baseline, we present a method that ex- (Vogel et al., 1996) . The HMM alignment model is computationally feasible even for very long sentences, and the phrase extraction method does not have limits on the length of extracted target side phrase. For each source phrase ranging from positions",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "(Vogel et al., 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": "D \u00a5 to D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": "the target phrase is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": "H 5 W \u00b0 \u00a3 ' D 7 W \u00a2 H \u00b1 \u00a3 \" D R \u00a3 and H \u00a9 } i \u00a3 ' ) \" $ 0 v W \u00a2 H \u00a3 \" D R \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": ", where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": "D \u00b2 \u00a3 \u00b3 D \u00a5 5 c \u0107 \u0107 d D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": "and H refers to an index in the target sentence pair. We calculate phrase translation probabilities (the scores for each extracted phrase) based on a statistical lexicon for the constituent words in the phrase. As the IBM1 alignment model gives the global optimum for the lexical probabilities, this is the natural choice. This leads to the phrase translation probability are estimated using the IBM1 word alignment model. The phrases extracted from this method can be used directly within our in-house decoder without the significant changes that other phrase based methods could require.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMM Phrase Extraction",
"sec_num": "7"
},
{
"text": "IBM alignment models were trained up to model 4 using GIZA (Al Onaizan et al., 1999) from Chinese to English and Chinese to English on two tracks of data. Figures describing the characteristics of each track as well as the test sentences are shown in Table ( 1). All the data were extracted from a newswire source. We applied our in house segmentation toolkit on the Chinese data and performed basic preprocessing which included; lowercasing, tagging dates, times and numbers on both languages. Translation quality is evaluated by two metrics, (MTEval, 2002) and BLEU (Papeneni et al., 2001) , both of which measure n-gram matches between the translated text and the reference translations. NIST is more sensitive to unigram precision due to its emphasis toward high perplexity words. Four reference translations were available for each test sentence. We first compare against a system built using word level lexica only to reiterate the impact of phrase translation, and then show gains by our method over a system that utilizes phrase extracted from the HMM method. The word level system consisted of a hand crafted (Linguistics Data Consortium) bilingual dictionary and a statistical lexicon derived from training IBM model 1. In our experiments we found that although training higher order IBM models does yield lower alignment error rates when measured against manually aligned sentences, the highest translation quality is achieved by using a lexicon extracted from the Model 1 alignment. Experiments were run with a language model (LM) built on a 20 million word news source corpus using our in house decoder which performs a monotone decoding without reordering. To implement our phrase extraction technique, the maximum approximation alignments were combined with the union operation as described in , resulting in a dense but inaccurate alignment map as measured against a human aligned gold standard. 
Since bi-directional translation models are available, scoring was performed in both directions, using IBM Model 1 lexica for the within-sentence scoring. The final phrase level scores computed in each direction were combined by a weighted average before the pruning step. Source side phrases were restricted to be of length 2 or higher, since word lexica were available. Weights for each scoring metric were determined empirically against a validation set (alignment map scores were assigned the highest weighting). Table (2) shows results on the small data track, while Table (3) shows results on the large data track. The technique described in this paper is identified by its own label in the tables. The results show that the phrase extraction method described in this paper contributes statistically significant improvements over the baseline word and phrase level (HMM) systems. When compared against the HMM phrases, our technique shows statistically significant improvements. Statistical significance is evaluated by considering deviations in sentence level NIST scores over the 993 sentence test set, with a NIST improvement of 0.05 being statistically significant at the 0.01 alpha level (Table 3: Large track results). In combination with the HMM method, our technique delivers further gains, providing evidence that different kinds of phrases have been learnt by each method. The improvements from our method are more apparent in the NIST score than in the BLEU score. We predict that this effect is due to the language specific correction that treats target phrases with function words at the boundaries as the same phrase. This correction causes the burden of selecting the correct phrase instance from several possible translations to be placed on the language model. Correctly translating function words dramatically boosts the NIST measure, as it places emphasis on high perplexity words, i.e. those with diverse contexts.",
"cite_spans": [
{
"start": 59,
"end": 84,
"text": "(Al Onaizan et al., 1999)",
"ref_id": null
},
{
"start": 544,
"end": 558,
"text": "(MTEval, 2002)",
"ref_id": null
},
{
"start": 568,
"end": 591,
"text": "(Papeneni et al., 2001)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table (",
"ref_id": null
},
{
"start": 2428,
"end": 2435,
"text": "Table (",
"ref_id": null
},
{
"start": 2484,
"end": 2491,
"text": "Table (",
"ref_id": null
},
{
"start": 2937,
"end": 2944,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "8"
},
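{
"text": "The bidirectional score combination described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the system's implementation: the function name and default weights are hypothetical, standing in for the empirically determined values mentioned in the text.

```python
def combine_bidirectional(score_fwd, score_bwd, w_fwd=0.5, w_bwd=0.5):
    # Weighted average of the phrase-pair scores computed in the
    # source-to-target and target-to-source directions; in practice
    # the weights would be tuned against a validation set.
    return (w_fwd * score_fwd + w_bwd * score_bwd) / (w_fwd + w_bwd)

# Example: a phrase pair scored 0.8 forward and 0.6 backward.
combined = combine_bidirectional(0.8, 0.6)
# combined == 0.7
```

The combined score would then be passed to the pruning step before being converted into a translation probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "8"
},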
{
"text": "We have presented a method to efficiently extract phrase relationships from IBM word alignment models by leveraging the maximum approximation as well as the word lexicon. Our method is significantly less computationally expensive than methods that attempt to explicitly model phrase level interactions within alignment models, and it recovers well from noisy alignments at the sentence and corpus level. The significant improvements over the baseline carry through when this method is combined with other phrasal and word level methods. Further experimentation is required to fully assess the robustness of this technique, especially on comparable, but not parallel, corpora. The language specific scoring methods have a significant impact on translation quality, and further work extending these methods to represent specific characteristics of each language promises to deliver further improvements. Although the method performs well, it lacks an explanatory framework for the extraction process; instead, it leverages the well understood fundamentals of the traditional IBM models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "Combining phrase level knowledge sources within a decoder in an effective manner is currently our primary research interest, specifically the integration of knowledge sources of varying reliability. Our method has been shown to be an effective contributing component within the translation framework, and we expect to continue to improve the state of the art in machine translation by improving phrasal extraction and integration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
}
],
"back_matter": [
{
"text": "where the first term is the mean score of those hypotheses with a score less than 1, and the second term is the mean score of those hypotheses with a score greater than or equal to 1. Once pruning is completed, we convert the scores into a probability measure conditioned on the source n-gram and assign the probability estimate as the translation probability for the hypothesis, as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
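,
{
"text": "The conversion from pruned scores to conditional probabilities described above can be sketched as follows. This is a minimal illustration, not the system's implementation; the data structure and function name are hypothetical:

```python
from collections import defaultdict

def to_translation_probabilities(scores):
    # Normalize phrase-pair scores so that, for each source n-gram,
    # the scores of its surviving target hypotheses sum to one,
    # yielding an estimate of p(target | source).
    totals = defaultdict(float)
    for (source, target), score in scores.items():
        totals[source] += score
    return {(source, target): score / totals[source]
            for (source, target), score in scores.items()}

# Hypothetical pruned scores for one source phrase
scores = {('s2 s3', 't1 t2'): 3.0, ('s2 s3', 't4'): 1.0}
probs = to_translation_probabilities(scores)
# probs[('s2 s3', 't1 t2')] == 0.75
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}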
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "19",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation, Computational Linguistics vol 19(2) 1993",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Maximum Entropy Minimum Divergence Translation Model",
"authors": [
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Foster 2000. A Maximum Entropy Minimum Divergence Translation Model, Proc. of the 38th Annual Meeting of the Association for Computational Linguistics",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Phrase-Based, Joint Probability Model for Statistical Machine Translation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Conference on Empirical Methods in Natural Language Processing",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu and William Wong 2002. A Phrase-Based, Joint Probability Model for Statistical Machine Translation, Proc. of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA. NIST 2002. MT Evaluation Kit Version 9, www.nist.gov/speech/tests/mt/",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative Training and Maximum Entropy Models for Statistical Machine Translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. North American Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och, Hermann Ney 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation, Proc. North American Association for Computational Linguistics",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Comparison of Alignment Models for Statistical Machine Translation",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney 2000. A Comparison of Alignment Models for Statistical Machine Translation, Proc. of the 18th International Conference on Computational Linguistics. Saarbrucken, Germany",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improved Alignment Models for Statistical Machine Translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the Joint Conference of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och, Christoph Tillmann, Hermann Ney 1999. Improved Alignment Models for Statistical Machine Translation, Proc. of the Joint Conference of Empirical Methods in Natural Language Processing, p20-28, MD.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papeneni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
}
],
"year": 2001,
"venue": "IBM Research Report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papeneni, Salim Roukos, Todd Ward 2001. BLEU: A Method for Automatic Evaluation of Machine Translation, IBM Research Report, RC22176",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "HMM-based Word Alignment in Statistical Translation",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of COLING '96: The 16th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "836--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Vogel, Hermann Ney, and Christoph Tillmann 1996. HMM-based Word Alignment in Statistical Translation, Proc. of COLING '96: The 16th International Conference on Computational Linguistics, pp. 836-841. Copenhagen, Denmark",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fast Decoding for Statistical Machine Translation",
"authors": [
{
"first": "Yeyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the International Conference in Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yeyi Wang, Alex Waibel 1998. Fast Decoding for Statistical Machine Translation, Proc. of the International Conference in Spoken Language Processing",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stochastic Inversion Transduction Grammars, with Application to Segmentation, Bracketing, and Alignment of Parallel Corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95)",
"volume": "",
"issue": "",
"pages": "1328--1335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu 1995. Stochastic Inversion Transduction Grammars, with Application to Segmentation, Bracketing, and Alignment of Parallel Corpora, Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pp. 1328-1335. Montreal",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A syntax-based statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the 39th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight 2001. A syntax-based statistical translation model, Proc. of the 39th Annual Meeting of the Association for Computational Linguistics, France",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Figure",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Potential translations for source phrase s2s3 are shown by rounded boxes.",
"type_str": "figure"
}
}
}
}